What Happened at the AI Summit in Delhi? The Warning That Shook the Room

Feb 20, 2026 - 17:00

The fourth global AI summit convened this week at Bharat Mandapam in New Delhi — the largest gathering yet, the first hosted in the Global South, and by several measures the most politically charged since the series began at Bletchley Park in 2023. Over five days, more than 20 heads of state, 60 ministers, and the chief executives of virtually every major AI company on earth descended on India’s capital to talk about the future of a technology that is simultaneously generating record profits and record anxiety.

Sundar Pichai was there. So was Sam Altman. Dario Amodei of Anthropic. Mukesh Ambani. Rishi Sunak. Emmanuel Macron shared a stage with Narendra Modi. António Guterres addressed the hall. Bill Gates was supposed to speak but pulled out hours before his keynote, the Gates Foundation citing a desire to keep the focus on the summit’s priorities — though the timing, amid renewed scrutiny of his ties to Jeffrey Epstein, was noted by everyone in attendance.

It was, by any measure, a spectacle. More than 250,000 visitors. Over 300 exhibitors across a 70,000-square-metre expo. Delhi hotel suites that normally run at $2,200 a night were listed at $33,000. The Supreme Court issued a circular allowing advocates to appear by video link because of anticipated traffic gridlock. And India set a Guinness World Record for the most pledges received for an AI responsibility campaign in 24 hours — 250,946 of them.

But spectacle is not the same as substance. And for all the diplomatic language, carefully worded voluntary commitments and carefully staged photo opportunities, the most important words spoken at the India AI Impact Summit may have come not from a head of state or a Silicon Valley chief executive, but from the 32-year-old founder of a French AI company that most people outside the industry have never heard of.

Mensch’s Warning

Arthur Mensch, co-founder and chief executive of Mistral AI, took the stage on Thursday and said what many in the room were thinking but few were willing to say out loud.

“We are at risk today,” he told delegates. “We are facing too much concentration of power in artificial intelligence. We don’t want to be in a world where three or four enormous companies actually own the deployment and the making of AI — actually own access to information.”

It is not, on the surface, a novel observation. The dominance of a small number of American firms in frontier AI — OpenAI, Google DeepMind, Anthropic, Meta — has been a recurring theme at every global AI gathering since Bletchley. But Mensch was making a more specific and more uncomfortable point: that despite three years of summits, declarations and voluntary commitments, the concentration has only deepened.

Mistral, valued at nearly €12 billion, is Europe’s leading independent AI model builder. It is also a fraction of the size of its American competitors. OpenAI was last reported to be valued at over $850 billion. The US-based cloud providers — AWS, Google, Microsoft — are building out most of the infrastructure needed to power and run AI models globally. The asymmetry is not shrinking. It is accelerating.

Mensch’s argument went beyond market share. He warned that concentrated ownership of AI creates excessive geopolitical leverage — that countries and institutions which rely on a handful of foreign providers for their AI infrastructure are ceding something more fundamental than a technology contract. They are ceding sovereignty. “Everyone who runs AI workloads must have access to the turn-on and turn-off button,” he said. “They must not be dependent on external providers who can turn off the button.”

He called for a different path: decentralised AI, built on open-source models, owned and operated by the countries and institutions that use it. It was, unmistakably, a pitch for Mistral’s own approach. But it was also a challenge to every government in the room — and to the American companies sitting in the front rows.

The Gap Between Words and Action

The AI summit series was born in November 2023 at Bletchley Park, where the UK convened an urgent conversation about AI safety following the explosive growth of ChatGPT. That gathering produced the Bletchley Declaration — a statement signed by 28 countries acknowledging the risks of frontier AI. Seoul in 2024 followed with further voluntary commitments. Paris in 2025 was billed as an “Action Summit” that would move from promises to outcomes.

New Delhi was supposed to go further still, shifting focus from safety and governance to real-world impact — particularly for the developing world. Modi’s keynote introduced the MANAV vision (the Hindi word for “human”), a five-pillar framework covering ethics, accountability, sovereignty, accessibility and legitimacy. Macron praised India’s digital infrastructure as something “no other country in the world has built.” Guterres called on tech companies to support a $3 billion global fund to make computing power more affordable and AI skills more accessible, warning that the technology’s future “cannot be decided by a handful of countries — or left to the whims of a few billionaires.”

But the structural critique published by TechPolicy.Press cut to the heart of the problem. The summit’s architecture, it argued, granted multinational corporations parity with sovereign governments — through the CEO Roundtable and the Leaders’ Plenary — while providing no equivalent platform for civil society, labour leaders, or human rights defenders. The people most likely to be affected by AI’s disruption of work, privacy and public services had the least voice in shaping its governance.

And the US delegation, according to the same analysis, arrived with an agenda centred not on cooperation but on dominance — framing AI as a geopolitical race against China rather than a shared challenge requiring collective governance.

The Ethics Problem Nobody Solved

Four summits in, the fundamental ethical questions around AI remain largely unresolved. Who is responsible when an AI system causes harm? How should the economic value generated by AI be distributed? What rights do workers have as their roles are automated? How do you govern a technology that is developing faster than any regulatory framework can adapt?

The voluntary commitments that emerged from Delhi — the “New Delhi Frontier AI Impact Commitments” — are, like their predecessors from Bletchley, Seoul and Paris, non-binding. They rely on the goodwill of companies whose primary obligation is to their shareholders and whose competitive incentive is to move as fast as possible.

OpenAI’s Altman told the summit that regulation is needed “urgently.” But urgently by whose timeline? Altman has also argued that overly tight regulation could hold the US back in the AI race — a tension that captures the central contradiction of every global AI gathering: the companies calling for governance are the same companies whose market position depends on moving faster than governance can follow.

The deeper ethical challenge is structural. Less than one per cent of ChatGPT usage comes from low-income countries. AI adoption is overwhelmingly concentrated in wealthy nations and, within those nations, in wealthy firms. The promise that AI will democratise access to knowledge and capability is, for now, running well behind the reality that it is entrenching existing advantages.

India’s bet — that it can lead through deployment rather than development, using AI to improve public services, agriculture and healthcare for 1.4 billion people — is the most ambitious attempt to challenge that pattern. Whether it succeeds will depend on whether the technology can be adapted to local languages, local needs and local infrastructure at a scale that justifies the rhetoric.

What Mensch Got Right

Mensch’s speech was self-interested. Mistral benefits directly from a world that values open-source AI and digital sovereignty. But self-interest does not make an argument wrong.

The concentration of AI power in a handful of American companies is not a theoretical risk. It is the current reality. And three years of summits have not altered it. The voluntary commitments have not slowed the consolidation. The declarations have not redistributed the compute. The speeches about inclusion have not changed who controls the models, the data or the infrastructure.

What Delhi demonstrated, perhaps more clearly than any previous summit, is the gap between the conversation the world is having about AI and the decisions that are actually shaping its trajectory. The conversation is about ethics, inclusion and shared prosperity. The decisions are being made in boardrooms in San Francisco, driven by competitive pressure, investor expectations and the logic of scale.

Mensch’s warning was not new. That is precisely what made it so damning. We have heard it before — at Bletchley, at Seoul, at Paris. And still, nothing has changed.

The post What Happened at the AI Summit in Delhi? The Warning That Shook the Room appeared first on European Business & Finance Magazine.