Why Is Anthropic Growing So Fast? The Infrastructure War Behind Claude's Rise

[Image: Large-scale data center facility viewed from above at sunset, representing Anthropic's growing AI compute infrastructure]

Quick Answer: Anthropic is growing because Claude Code and enterprise demand created a compute bottleneck the company is now fixing through simultaneous deals with SpaceX, Amazon, Google, Microsoft, and others. The model was never the problem. Infrastructure was. A near-$1 trillion valuation reflects investor belief that Anthropic can become the AI layer underneath serious business work — not just another chatbot.

220,000 NVIDIA GPUs. That is what the SpaceX deal brings Anthropic — not eventually, but within the month of signing. No multi-year construction timeline. No waiting on permits. Just compute, available now, at a scale that immediately changed what Claude users actually experience.

That single detail explains more about Anthropic's growth than any benchmark comparison. But it raises an obvious question: why did Anthropic need that compute so badly, and why did Elon Musk — of all people — end up providing it?

The Real Problem Was Never the Model

Claude Opus was already one of the strongest models available for serious work. Claude Code had become one of the most talked-about developer tools in AI. Enterprise interest was real. Demand was real.

And yet users kept running into walls. Claude Code's 5-hour rate limits were a constant complaint. Peak-hour slowdowns made the experience feel inconsistent. API users wanted more Opus access. Developers building actual products on Claude needed predictability — not a paid subscription that still throttled them at inconvenient times.

The bottleneck was infrastructure, not intelligence. Anthropic had built something people genuinely wanted to use. It just did not have enough capacity for them to actually use it.

That is what the SpaceX deal immediately addressed. After the announcement, Claude Code's 5-hour limits were doubled for Pro, Max, Team, and enterprise plans. Peak-hour capacity reductions for Claude Code were removed entirely. API rate limits for Claude Opus jumped — some tiers moving from hundreds of thousands of input tokens per minute to millions. The improvements were not incremental. They were a direct signal that the constraint had been compute all along.
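The jump from hundreds of thousands to millions of input tokens per minute is easiest to see from the client side. Here is a minimal sketch of a per-minute token budget — a token bucket that refills each minute window. The limit values are illustrative only, not Anthropic's published tiers:

```python
import time

class TokenBudget:
    """Client-side tokens-per-minute budget: a token bucket that
    refills to the full limit at the start of each minute window."""

    def __init__(self, tokens_per_minute, clock=time.monotonic):
        self.limit = tokens_per_minute
        self.clock = clock
        self.window_start = clock()
        self.remaining = tokens_per_minute

    def _refill(self):
        # Reset the budget once a full 60-second window has elapsed.
        now = self.clock()
        if now - self.window_start >= 60:
            self.window_start = now
            self.remaining = self.limit

    def try_spend(self, tokens):
        """Return True if the request fits in the current window,
        else False (the caller should wait for the next window)."""
        self._refill()
        if tokens <= self.remaining:
            self.remaining -= tokens
            return True
        return False

# Illustrative numbers only: an old tier vs. a raised tier.
old_tier = TokenBudget(450_000)    # hundreds of thousands of tokens/min
new_tier = TokenBudget(2_000_000)  # millions of tokens/min

request = 600_000  # one large Opus request
print(old_tier.try_spend(request))  # False: throttled under the old limit
print(new_tier.try_spend(request))  # True: fits under the raised limit
```

The same request that a lower tier has to split or delay clears the higher tier's window in one call, which is why a limit increase feels like a qualitative change to developers rather than a percentage improvement.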

The SpaceX Deal — Why Elon Said Yes

For months before the partnership, Elon Musk had been openly critical of Anthropic. He mocked Claude, criticized the company's culture, and positioned xAI's Grok as the sensible alternative to what he called AI safety theater. Then Anthropic announced it would use all of the compute capacity at SpaceX's Colossus 1 data center — over 300 megawatts and more than 220,000 NVIDIA GPUs.

It looks ridiculous on the surface. It makes sense when you look at the incentives.

Elon's real fight is with OpenAI, not Anthropic. The conflict is personal, legal, and public — centered on OpenAI's shift away from its original nonprofit structure. xAI exists partly as a counter to OpenAI's dominance. In that context, helping one of OpenAI's most credible competitors gain massive compute capacity is not a contradiction. It fits the strategy.

There is also a simple business reason. Compute clusters this size are extraordinarily expensive to build and maintain. Letting them sit underused makes no sense. Anthropic had urgent demand. SpaceX had available capacity. The deal happened.

After the partnership was announced, Elon publicly said he was impressed after meeting senior Anthropic people — that they seemed highly competent and no one triggered his "evil detector." That is a sharp reversal. But in AI right now, available GPUs matter more than old insults.

Every Major Compute Deal, Explained

The SpaceX deal is the one that got headlines, but it is one piece of a much larger pattern. Anthropic is simultaneously locked into compute agreements with essentially every major infrastructure player in AI.

| Partner | Deal | Timeline |
| --- | --- | --- |
| SpaceX (Colossus 1) | 220,000+ NVIDIA GPUs, 300 MW capacity | Available now (2025) |
| Amazon | Up to 5 GW agreement, nearly 1 GW new capacity | End of 2026 |
| Google / Broadcom | 5 GW agreement (TPUs + custom chips) | Starting 2027 |
| Microsoft / Nvidia | $30 billion Azure capacity partnership | Active |
| Fluidstack | $50 billion AI infrastructure investment | Active |
| Google Cloud | Reported $200 billion spend commitment over 5 years | Ongoing |

The Google relationship is the strangest one to think about clearly. Alphabet is reportedly investing up to $40 billion into Anthropic while Anthropic is committed to spending $200 billion with Google Cloud — a figure reportedly representing more than 40% of Google's disclosed revenue backlog. Google is simultaneously a major investor, a key infrastructure vendor, and a direct competitor through Gemini. Clean rivalries do not really exist in AI anymore. Compute is too important for anyone to hold firm competitive lines.

This is also why Kazakhstan's National Investment Corporation took a direct equity position in Anthropic through the Series E. The dollar amount was small relative to the cloud deals. But a sovereign wealth fund taking a stake in a frontier AI lab signals that these companies are starting to look like strategic national assets, not just startups chasing market share.

The Pentagon Fight — Where Safety Principles Hit a Wall

The Trump administration signed AI agreements with eight major tech companies for military and government use: SpaceX, OpenAI, Google, Microsoft, Nvidia, Amazon Web Services, Oracle, and Reflection. Anthropic was not on the list.

According to reporting, Anthropic refused to accept terms that would allow Claude to be used for all lawful purposes — a category that reportedly included autonomous weapons and mass surveillance applications. The response was severe. The administration labeled Anthropic a supply chain risk, a designation normally reserved for companies tied to foreign adversaries.

Anthropic sued. A federal judge blocked the government's effort, at least temporarily. The White House then reportedly reopened discussions after Anthropic announced major technical breakthroughs. The situation remains unresolved.

This is the sharpest version of the tension Anthropic now has to manage. It wants enterprise relevance and government contracts. It also wants safety limits that are actually enforced. Once frontier AI becomes part of national security infrastructure, those two things will keep colliding. The Pentagon fight was the first clear public example of what that collision looks like in practice.

Claude Mythos — The Model Anthropic Won't Release

The most unusual part of Anthropic's current position is a model most people have not heard of. Claude Mythos preview is reportedly so capable at finding software vulnerabilities that Anthropic decided not to release it publicly at all. Instead, access is limited to selected organizations — specifically for scanning and fixing their own software.

Mozilla reportedly used Mythos to find 271 vulnerabilities in Firefox, which were then patched. On the defensive side, that is a genuine public benefit. Security teams being able to find and fix vulnerabilities faster than attackers can exploit them is the kind of AI application that actually makes things safer.
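At the workflow level, restricted defensive access looks like batching your own source files into review requests. The sketch below only builds the request payloads; the model id, prompt, and payload shape are hypothetical, since Claude Mythos has no public API:

```python
from pathlib import Path

# Hypothetical model id and prompt — Claude Mythos is not publicly
# available, so this shows only the shape of a defensive-review pipeline.
HYPOTHETICAL_MODEL = "claude-mythos-preview"
REVIEW_PROMPT = (
    "Review the following source file for security vulnerabilities. "
    "Report each finding with a severity and a suggested fix."
)

def build_review_requests(root, max_bytes=100_000,
                          extensions=(".c", ".cpp", ".js", ".py")):
    """Walk a codebase we own and build one review request per source
    file. Files larger than max_bytes are skipped (they would need
    chunking before review)."""
    requests = []
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file() or path.suffix not in extensions:
            continue
        source = path.read_text(errors="replace")
        if len(source.encode()) > max_bytes:
            continue
        requests.append({
            "model": HYPOTHETICAL_MODEL,
            "file": str(path),
            "messages": [
                {"role": "user", "content": f"{REVIEW_PROMPT}\n\n{source}"}
            ],
        })
    return requests
```

The point of the sketch is the scope restriction: the pipeline walks only a codebase the organization controls, which matches the "scan and fix your own software" access model described above.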

The concerning part is that the same capability is not unique to Anthropic. Reports suggest other models, including GPT-5.5 and smaller systems, have shown comparable abilities in some evaluations. Which means the question is not really about one withheld Anthropic model. It is about what happens as this capability spreads across the ecosystem — to criminal organizations, ransomware groups, anyone with API access and a target.

The pattern may also extend beyond software. Any complex rule system filled with exceptions, edge cases, and loopholes — tax codes, financial regulations, legal frameworks — becomes something AI can analyze at a scale and speed humans cannot match. Software vulnerabilities are the current headline. The broader implication is harder to see clearly from here.

You can read more about what Anthropic has been finding inside Claude's internal processes in what Anthropic found inside Claude's thinking — the interpretability research running parallel to all of this infrastructure expansion.

My Take

What is actually happening here is that Anthropic is being forced to play a different game than the one it originally signed up for. The company was founded to be the careful alternative — slower, more methodical, safety-first. The market decided that was also valuable, and now the money and expectations attached to that reputation are massive.

The $900 billion valuation question is really a question about whether Claude Code and enterprise AI become the default infrastructure for serious knowledge work. If they do, the compute investments make sense. If the market fragments or OpenAI extends its lead, the same investments look like overreach.

The Pentagon fight is the tell. A company can say it has safety principles. It is a different thing entirely to take a "supply chain risk" designation from the US military, file a lawsuit, and hold the position. That is either genuine conviction or the most expensive brand play in tech history. Probably some of both.

Frequently Asked Questions

What is Anthropic's current valuation in 2026?

Anthropic is reportedly looking to raise up to $50 billion at a pre-money valuation of around $900 billion. After the financing closes, the post-money valuation could approach $1 trillion — which would surpass OpenAI's reported $852 billion valuation and make Anthropic the most valuable AI startup in the world.

Why did Anthropic partner with SpaceX even though xAI competes with Claude?

Because the incentives lined up despite the apparent contradiction. Anthropic needed compute urgently. SpaceX had Colossus 1 capacity available. Elon Musk's primary adversary is OpenAI, not Anthropic — helping a strong OpenAI competitor gain infrastructure serves his broader positioning. Business logic and strategic alignment both pointed the same direction, regardless of the prior public criticism.

What is Claude Mythos?

Claude Mythos is Anthropic's unreleased model designed for advanced security research. It is reportedly highly capable at identifying software vulnerabilities. Rather than releasing it publicly, Anthropic has restricted access to selected organizations who use it defensively — to find and fix vulnerabilities in their own code before attackers can exploit them. Mozilla reportedly used it to identify 271 Firefox vulnerabilities.

Why was Anthropic excluded from Pentagon AI contracts?

Anthropic reportedly refused to accept terms allowing Claude to be used for all lawful purposes, specifically objecting to applications involving autonomous weapons and mass surveillance. The Trump administration responded by labeling Anthropic a supply chain risk. Anthropic sued, and a federal judge temporarily blocked the designation. Discussions reportedly reopened afterward, but the underlying tension between Anthropic's safety limits and government demands remains unresolved.

How does Anthropic's revenue growth compare to OpenAI?

Anthropic's annualized revenue is reported to be approaching $45 billion, compared to roughly $9 billion at the end of 2024. OpenAI's reported valuation is around $852 billion, though direct revenue comparisons between the two companies vary by source. The key driver for Anthropic is Claude Code for developers and Cowork for enterprise non-technical users — both of which move Claude beyond chatbot territory into embedded business infrastructure.

The compute deals will keep expanding. The valuation will keep climbing or correcting. What matters most right now is whether users actually feel the difference — fewer limits, more reliable access, better API performance. That is the gap between a company that raised a lot of money and one that deserved to.

All figures in this article are sourced from publicly available reports, including Reuters, The Verge, and Anthropic's own announcements. Valuation and revenue figures are based on reporting current as of May 2026 and may change.

About Vinod Pandey

Vinod Pandey researches and analyzes developments in AI — covering models, infrastructure, and the business forces shaping the industry. Every article on Revolution in AI is built on publicly verifiable data, not speculation.
