Designing collective intelligence in the age of AI
1. Skills are only part of the picture. What is often overlooked is cognition.
Modern organizations increasingly promote two ideas:
- “Everyone must upskill in tech, data, and AI.”
- “Data scientists must become business partners.”
Both sound reasonable.
Both are cognitively incoherent.
They confuse collaboration with convergence, and integration with assimilation.
What is at stake is not mindset or willingness to change, but the integrity of distinct cognitive regimes.
Business expertise and technology / AI expertise are not two points on the same learning curve.
They are built on different forms of knowledge, evolve at different speeds, and obey different validation logics.
They are orthogonal epistemologies.
2. Two rationalities, two truth regimes
Business expertise is based on situated, partially tacit knowledge:
- accumulated slowly,
- shaped by real constraints,
- validated through robustness under exceptions.
Its rationality is ecological:
- heuristic,
- conservative by necessity,
- intolerant of “mostly correct” solutions.
Technology and AI expertise is based on abstraction and recombination:
- fast learning cycles,
- reversible experimentation,
- probabilistic validation.
Its rationality is analytical:
- model-driven,
- exploratory,
- tolerant of approximation.
When organizations ask business experts to “think like data scientists”, they erode robustness.
When they ask data scientists to “think like the business”, they kill exploration.
The result is not alignment, but cognitive dilution.
3. Cognition is distributed — not blended
In the age of AI, cognition is no longer homogeneous. It is distributed across:
- humans and machines,
- judgment and abstraction,
- constraint and speed.
Performance does not come from hybrid individuals,
but from well-designed cognitive architectures.
This requires making cognitive sovereignty explicit.
4. Cognitive spaces — non-substitutable by design
A cognitive space is a domain of legitimate authority over meaning, truth, and decision.
It is not a role or a skillset.
Effective transformation requires three explicitly separated spaces.
| Space of meaning (non-negotiable) | Space of formalization (hinge) | Space of execution (technology) |
| --- | --- | --- |
| Owned by business experts. | The only legitimate interface. | Owned by tech / data / AI experts. |
| Defines: purpose and intent; constraints; exceptions and implicit rules. | Produces: hypotheses; models; indicators; partial representations. | Implements: systems; pipelines; tools. |
| Situated, slow to stabilize, partially tacit. | By nature incomplete, reversible, debatable. | Abstract, composable, optimized. |
| Never coded. Never refactored. | | Makes no decisions of meaning. |
5. Expertise ≠ ownership (cognitive responsibility)
A critical distinction must be explicit:
- The business expert does not own delivery.
- The tech expert does not own meaning.
Each space has:
- non-delegable responsibilities,
- its own validation criteria.
This enables arbitration — not power struggles.
6. Cognitive capture — the silent failure mode
Cognitive capture occurs when one space imposes its cognitive regime on another,
implicitly and without arbitration.
Capture by technology
- premature abstraction,
- reformulation to fit architecture,
- success measured by elegance.
→ Meaning lost, failure visible only in production.
Capture by business
- technology as prosthesis,
- Excel logic at scale,
- success measured by unchanged outputs.
→ Automation of inefficiency.
Capture is symmetrical.
It is structural, not political.
7. Guardrails — structural, not cultural
Effective frameworks prevent capture by design.
Core rules:
- no implicit fusion of spaces,
- no automation of unstabilized meaning,
- no abstraction without explicit loss,
- no execution without validated formalization.
Practical implications:
- distinct artefacts per space,
- validation at each boundary,
- limited veto rights:
  - business on meaning,
  - tech on feasibility,
  - no veto across domains.
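As a thought experiment, these guardrails can be made concrete in code. The sketch below is purely illustrative (all names are invented for this example, not part of any framework): each space produces its own artefact type, every boundary has an explicit validation gate, and each side's veto is scoped to its own domain.

```python
# Illustrative sketch only: the guardrails above expressed as explicit
# validation gates between three hypothetical artefact types.
from dataclasses import dataclass


@dataclass
class MeaningSpec:
    """Space of meaning: owned by business experts."""
    intent: str
    constraints: list[str]
    stabilized: bool = False  # guardrail: no automation of unstabilized meaning


@dataclass
class Formalization:
    """Hinge space: the only legitimate interface between the other two."""
    spec: MeaningSpec
    model: str
    stated_losses: list[str]      # guardrail: no abstraction without explicit loss
    meaning_ok: bool = False      # business veto right: meaning only
    feasibility_ok: bool = False  # tech veto right: feasibility only


def business_review(f: Formalization, approve: bool) -> None:
    # Business validates meaning; it has no say on feasibility_ok.
    f.meaning_ok = approve


def tech_review(f: Formalization, approve: bool) -> None:
    # Tech validates feasibility; it has no say on meaning_ok.
    f.feasibility_ok = approve


def execute(f: Formalization) -> str:
    """Space of execution: refuses any formalization that skipped a gate."""
    if not f.spec.stabilized:
        raise ValueError("meaning not stabilized: automation forbidden")
    if not f.stated_losses:
        raise ValueError("abstraction without explicit loss")
    if not (f.meaning_ok and f.feasibility_ok):
        raise ValueError("missing validation at a boundary")
    return f"deploying {f.model}"
```

The design choice mirrors the text: no function can grant itself a cross-domain veto, and execution is structurally impossible until both boundary validations have happened.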
8. A non-negotiable principle
Any automation is a partial and reversible translation of a living system —
never its substitution.
Refactoring is not a decision of meaning.
It must remain structurally reversible.
Conclusion
Organizations do not fail because they lack alignment.
They fail because they lack cognitive architecture.
In the age of AI, performance depends on the deliberate composition and protection of heterogeneous cognitive spaces — not on making everyone think alike.
Bibliographic references (non-exhaustive)
The Tacit Dimension — Michael Polanyi
Rationality in Organizations — Herbert A. Simon
The Adaptive Toolbox — Gerd Gigerenzer
Cognition in the Wild — Edwin Hutchins
