
AI: Are we caught in a collective delusion?

There is something oddly euphoric about the present moment. Markets celebrate every technological breakthrough. Executives promise unprecedented productivity gains. Universities are overhauling their curricula at breakneck speed. Employees are learning to converse with machines that write, summarise, code, and decide.

And yet, behind this acceleration, an unsettling question emerges, almost indecent in its scepticism: are we still in control of what we are unleashing? When Yves Oltramare, former LODH partner and centenarian who lived through the Great Depression, the Cold War and globalisation, asks whether we are “dreaming or drifting into a kind of collective madness”, he is neither nostalgic nor technophobic. He is describing a vertigo. Not the vertigo of the machine, but that of speed itself.


The invisible rupture

Economic history is punctuated by major innovations. Steam, electricity and computing each reshaped production and power structures. Artificial intelligence appears to follow this lineage. But the comparison stops there.

The current revolution no longer concerns energy or mechanics. It concerns cognition. For the first time, systems automate tasks that once required judgement, analysis and, at times, creativity. The machine no longer replaces only the arm; it assists, and sometimes supplants, the mind.

The parallel with the atomic age becomes clear. Nuclear power revealed humanity’s capacity for self‑destruction. AI reveals its capacity for intellectual self‑delegation. The rupture is not merely technological; it is anthropological.

The question is not whether AI will boost productivity. It will. The question is: at what point do we cease exercising the faculties we hand over to increasingly powerful statistical systems?

Dependence does not arrive with spectacle. It advances gradually. Each accepted recommendation, each generated text, each algorithmically optimised decision appears rational. Yet together they produce a subtle shift: reflection becomes rarer, doubt contracts, long‑term thinking evaporates. Dissenting opinions are now treated as aberrations. Can a society accelerate indefinitely without eroding its capacity to understand?


Power, displaced

Artificial intelligence is not an abstraction. It is developed, financed and controlled by a small number of private actors with colossal infrastructure and privileged access to global data. This concentration is not conspiratorial; it stems from extreme economies of scale and formidable technological barriers. Its consequences, however, are profoundly political.

Democratic systems rest on informed deliberation. Yet AI now intervenes in the production of information itself — in its sorting, its circulation and increasingly its creation. The boundary between assistance and influence grows porous.

In an era of heightened polarisation, illustrated in the United States by institutional tensions surrounding figures such as Donald Trump, the ability of democracies to regulate technologies evolving faster than the law appears uncertain. Legislation is slow; code is instantaneous.

The imbalance runs deeper still. Faced with economic, migratory or technological crises, states increasingly seem inclined to alter the architecture rather than repair the mechanism. As if, confronted with a flat tyre, a driver chose to replace the engine. The constitution, once a stable foundation, becomes a tactical instrument.

A democracy that continually adjusts its foundations to treat symptoms weakens its own regulatory capacity. The risk is not merely the impotence of law in the face of code. It is the erosion of the stability on which law depends.

This is not a dystopia in which machines govern states. The shift is subtler. When content production becomes automatable, when simulations steer strategic choices, when markets react to signals generated by other algorithms, the centre of gravity of power moves towards those who design and calibrate these systems.

The political question is no longer “who votes?” but “who trains the models?”


The new geopolitics of intelligence

Artificial intelligence is no longer an emerging technology; it has become an instrument of power. Unlike nuclear capability, which relied on visible infrastructure and formal treaties, AI spreads silently, asymmetrically, and is difficult to contain. It reshapes power relations not through threat, but through dependence.

The United States dominates this new geography through an ecosystem in which private capital, academic research, and federal support operate as a coherent whole. China follows a path built on centralisation and vertical integration. Europe seeks relevance through regulation, acutely aware of its industrial lag.

Behind these divergent strategies lies a deeper risk: cognitive fragmentation. Models trained on different values, information systems that cannot interoperate, competing narratives about truth, security and freedom. In such a landscape, the question is no longer who innovates fastest, but who defines the limits, the uses and the stories that will accompany this innovation.


The human variable

Two temptations dominate the public debate: technological enthusiasm and existential alarmism. Neither is sufficient.

The real issue lies elsewhere: in our collective willingness to delegate ever more of our judgement. AI can amplify our abilities, reduce drudgery, and accelerate scientific research. It can also standardise thought, smooth out divergence, and reduce the uncertainty that fuels critical reflection. The boundary depends less on code than on culture.

The question becomes less technological than civilisational. Do we preserve spaces where decisions remain human, slow, imperfect, and deliberate?

Collective madness does not necessarily announce itself through collapse. It may take the form of consensual acceleration, a generalised enthusiasm that renders caution suspect. AI has no intention of its own. It reflects the objectives we assign to it and the data we feed it. But if we stop questioning those objectives, if we progressively externalise our capacity for judgement, the rupture will not come from the machine. It will come from us.


The defining aberration of our time: Schools that no longer teach writing

Cognitive delegation is not theoretical. It is already visible in classrooms. From early schooling to universities, teachers report that students no longer draft their own first versions. They begin with an AI‑generated text.

The results are often polished. But the underlying learning erodes. Students no longer develop the ability to structure thought over time. They become dependent on a tool that offers the most probable, the smoothest, the least risky answer.

The phenomenon extends beyond education. In medicine, justice and finance, delegation is rational, sometimes indispensable, yet it gradually diminishes professional intuition, the ability to detect exceptions, anomalies, the detail that escapes statistics.

The anthropological question is therefore not “does the machine think?” but “what becomes of a society that no longer exercises certain forms of thinking?”


Conclusion: The choice that remains human

The question is not whether AI will surpass the human mind. It will, in certain tasks. The question is how far we are willing to let it displace our judgement.

Each generation of automated systems transforms not only what we do, but who we are. When education abandons critical thinking, when institutions adapt to the short term, when public opinion aligns with what algorithms deem probable, the risk is no longer technical. It is existential.

Collective madness is not a sudden collapse. It takes the form of a consensual, seductive acceleration. It begins the day we entrust our reason to lines of code and accept that human reflection becomes the exception rather than the norm.

In a world moving ever faster, the true act of resistance is the preservation of autonomous thought. And it is this choice, still human, that will determine whether we master our tools, or allow them to define us.