$1.4 trillion was spent building AI models. Apple just spent $1 billion picking the winner.

Mar 5, 2026 - 23:00

The industry spent $1.4 trillion building AI models. Apple spent $1 billion picking the winner. With Gemini-powered Siri about to land on 2.5 billion devices, the company that looked late may have been playing the only game that mattered.

For two years, the consensus was clear: Apple had missed the AI moment. While OpenAI, Google, Meta and Anthropic poured hundreds of billions into training frontier models, Cupertino stayed quiet. No chatbot launch. No public model benchmark wars. No breathless product demos. Wall Street worried. Analysts questioned the strategy. Competitors assumed Apple had simply fallen behind.

Then, in January 2026, Apple made a very Apple move. It partnered with Google in a multi-year deal to power the next generation of Siri and Apple Foundation Models with Gemini AI. Not build the biggest model. Pick the best one. Integrate it everywhere. And maintain total control over privacy and user experience.

The upgraded Siri lands this month with iOS 26.4, rolling out across an installed base of roughly 2.5 billion active devices. While every other AI company is still fighting for distribution, Apple already owns the hardware those users live on.

The $1 billion shortcut

The financial contrast is striking. The AI industry has collectively spent more than $1.4 trillion on model training since 2023. Apple is reportedly paying Google around $1 billion per year for access to Gemini — a fraction of what it would cost to build a competitive foundation model from scratch.

The deal is structured as a white-label arrangement. Gemini powers the backend, but users will never see Google’s branding. From a consumer perspective, this is still Siri — just dramatically smarter. The custom Gemini system handles Siri’s summariser and planner functions while running on Apple’s Private Cloud Compute infrastructure, maintaining the privacy standards that Apple has turned into a competitive moat against rivals who monetise user data.

Apple is also continuing to develop its own models internally — the next-generation project is reportedly codenamed Ferret-3, targeted for 2026-2027. The Google partnership buys time while those efforts mature. It is a bridge strategy, not a surrender.

Distribution beats invention

The deeper insight is not about the deal itself. It is about what the deal reveals about where value actually accrues in AI.

The industry has spent two years operating on the assumption that whoever builds the most powerful model wins. Billions have been raised, data centres built, and talent wars fought on that premise. But Apple’s move suggests a different thesis entirely: the model is a commodity. Distribution is the moat.

Consider the numbers. OpenAI, Google and Anthropic are all competing for paying subscribers at $20 to $200 per month, fighting for each user one by one. Apple flips a software switch and reaches 2.5 billion devices overnight. No acquisition cost. No onboarding friction. No app to download.
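That scale gap can be made concrete with a back-of-the-envelope comparison. The $20 subscription price and the 2.5 billion installed base come from the article; the customer acquisition cost is a hypothetical figure chosen purely for illustration:

```python
# Back-of-the-envelope: subscription-funnel distribution vs. an installed
# base. The acquisition cost below is a hypothetical assumption, not a
# figure from the article.

SUBSCRIPTION_PRICE = 20          # $/month, low end cited in the article
ASSUMED_CAC = 50                 # $ to acquire one subscriber (hypothetical)
INSTALLED_BASE = 2_500_000_000   # active Apple devices, per the article

def months_to_recoup_cac(cac: float, price: float) -> float:
    """Months of subscription revenue needed to pay back acquisition cost."""
    return cac / price

def cost_to_reach(users: int, cac: float) -> float:
    """Total spend to acquire `users` subscribers one by one."""
    return users * cac

print(months_to_recoup_cac(ASSUMED_CAC, SUBSCRIPTION_PRICE))   # 2.5
print(f"${cost_to_reach(INSTALLED_BASE, ASSUMED_CAC):,.0f}")   # $125,000,000,000
```

Even at a modest assumed acquisition cost, reaching a user base the size of Apple's installed base one subscriber at a time would run into the hundreds of billions of dollars; Apple reaches it with a software update.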

Startups are racing to build dedicated AI hardware — pins, pendants, glasses, standalone devices. Apple already ships the iPhone, iPad, Mac, Apple Watch and Vision Pro, all running on its own silicon designed for on-device AI inference. The hardware distribution problem that every AI startup is trying to solve is one that Apple solved a decade ago.

The ecosystem play

Apple’s approach also sidesteps the margin pressure that is building across the AI industry. Running inference in the cloud is expensive. Every ChatGPT query, every Gemini response, every Claude conversation costs real money in compute. The companies providing those services are either subsidising usage to grow market share or charging subscription fees that limit adoption.

Apple’s model runs differently. By processing as much as possible on-device through its own neural engines, it avoids the per-query cost structure that makes AI services inherently unprofitable at scale. The cloud component — handled through Private Cloud Compute with Gemini integration — is reserved for the tasks that genuinely require it. The result is AI that is faster, cheaper to operate and more private than cloud-first competitors.
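The per-query economics can be sketched the same way. Nothing below is Apple's actual cost data: the unit cost, usage level and on-device share are all hypothetical assumptions, chosen only to show why shifting queries on-device bends the provider's cost curve:

```python
# Toy cost model: cloud-only inference vs. a hybrid that answers most
# queries on-device. All unit costs and usage figures are hypothetical.

CLOUD_COST_PER_QUERY = 0.002   # $ of cloud compute per query (assumed)
QUERIES_PER_USER_PER_DAY = 10  # assumed usage level

def annual_cloud_cost(users: int, on_device_share: float) -> float:
    """Yearly provider-side inference bill for a given on-device share.

    Queries handled on-device cost the provider nothing marginal; only
    the remainder hits cloud compute.
    """
    cloud_queries = users * QUERIES_PER_USER_PER_DAY * 365 * (1 - on_device_share)
    return cloud_queries * CLOUD_COST_PER_QUERY

users = 2_500_000_000
print(f"cloud-only:    ${annual_cloud_cost(users, 0.0):,.0f}/yr")
print(f"90% on-device: ${annual_cloud_cost(users, 0.9):,.0f}/yr")
```

Under these assumptions a cloud-only service at Apple's scale would face a yearly inference bill in the tens of billions, while routing nine in ten queries to the device's own silicon cuts that by an order of magnitude.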

This is the ecosystem advantage that no amount of model training can replicate. Apple controls the chip, the device, the operating system, the app store, the payment layer and now the AI integration. Every layer reinforces the others.

What the market is missing

Tim Cook framed the strategy on Apple’s latest earnings call not as a revenue line but as a platform feature. Apple Intelligence is integrated across the operating system rather than sold as a standalone product. The monetisation comes indirectly — through device upgrades, services revenue and the stickiness that keeps users locked into the Apple ecosystem for years.

This is why the “Apple lost the AI race” narrative was always missing the point. Apple was never racing to build the smartest model. It was waiting to see who did, then integrating the winner into a distribution network that nobody else can match.

The AI era may not be decided by who builds the best model. It may be decided by who owns the device that model runs on. And in that game, Apple does not just have a head start — it has a monopoly on the playing field.

Being late to AI might have been the most Apple strategy imaginable. The question now is whether the rest of the industry realises too late that they were competing in the wrong race.


FAQ

Why did Apple choose Google Gemini over building its own AI model? Apple evaluated multiple AI providers including OpenAI and Anthropic before selecting Google’s Gemini as the foundation for Apple Intelligence and the new Siri. The multi-year deal, reportedly worth around $1 billion per year, gives Apple access to frontier AI capabilities without the tens of billions required to train a competitive model independently. Apple is continuing to develop its own models internally under the codename Ferret-3, with the Google partnership serving as a bridge strategy while those efforts mature.

When does the new AI-powered Siri launch? The Gemini-powered Siri is expected to roll out with iOS 26.4 in March 2026. It will be available on iPhone 15 Pro and newer devices. The upgrade enables more personalised responses, contextual understanding and the ability to access on-screen content and personal data to complete tasks. Apple’s Private Cloud Compute handles the cloud processing while maintaining its privacy standards, with no Google branding visible to end users.

How does Apple’s AI strategy differ from OpenAI and Google? While OpenAI and Google compete to build the most powerful AI models and sell access through subscriptions, Apple’s strategy focuses on distribution and integration. Rather than charging users directly for AI, Apple embeds it across its operating system and hardware ecosystem, reaching 2.5 billion devices without per-user acquisition costs. By running AI on-device through its own chips wherever possible, Apple also avoids the high cloud inference costs that make many AI services unprofitable at scale, creating a fundamentally different business model.

The post $1.4 trillion was spent building AI models. Apple just spent $1 billion picking the winner. appeared first on European Business & Finance Magazine.