Alphabet’s Google will provide Anthropic with up to one million of its custom-built AI chips in a groundbreaking agreement that could significantly change the competitive landscape of AI infrastructure. The multibillion-dollar deal dramatically deepens Anthropic’s compute capacity and, at the same time, draws the company even closer to one of its biggest investors and technology partners.
The chips are Tensor Processing Units (TPUs), Google’s in-house semiconductors designed to accelerate the training and serving of very large machine learning models. Set for deployment in 2026, the rollout will bring more than a gigawatt of computing power online—a massive expansion aimed at meeting the growing demand for resources to train and operate large AI models.
In a joint statement, Anthropic chief financial officer Krishna Rao described the expansion as a natural evolution of the companies’ long-standing partnership. “This latest collaboration allows us to scale the computing power we need to continue advancing the frontier of AI,” Rao said.
For Google, the deal strengthens its position as both a leading investor and a key infrastructure provider in the AI race. It is also a loud and clear bet on the future of Google’s specialized hardware ecosystem, especially as companies like Anthropic seek alternatives to Nvidia’s in-demand GPUs.
Google has already poured roughly US$3 billion into Anthropic over the past two years, complementing Amazon’s separate US$8 billion commitment. Both tech giants provide cloud infrastructure to the AI startup, which relies on their platforms to train and operate its Claude family of models—seen as rivals to OpenAI’s GPT series and Google’s own Gemini systems.
The agreement also underscores the enormous capital demands fueling the current AI boom. Anthropic, recently valued at US$183 billion following a US$13 billion funding round, continues to seek investment partners to support its rapid expansion.
While financial details of how Anthropic will pay for the TPU access remain undisclosed, the scale of the partnership highlights the escalating costs—and strategic dependencies—driving the AI industry’s infrastructure arms race.
