
xAI and Anthropic announced a partnership on Wednesday that will give Anthropic access to the full compute capacity of xAI’s Colossus 1 data center, a facility with an estimated 300 megawatts of computing power.
The agreement immediately allowed Anthropic to raise usage limits for its AI systems and represents one of the largest compute infrastructure arrangements disclosed between major AI companies. Financial terms were not announced, though the deal is reportedly worth billions of dollars.
The partnership also marks a shift in xAI’s business positioning. Rather than acting primarily as a consumer of compute resources for its own AI models, the company is now monetizing its infrastructure directly by supplying computing capacity to another model developer.
Elon Musk said on X that xAI had already shifted training operations to a newer facility known as Colossus 2 and no longer required both data centers for internal use.
xAI Expands Beyond Grok
In the near term, the deal provides additional revenue as xAI continues expanding its infrastructure operations.
xAI’s primary consumer-facing AI product remains Grok, which reportedly experienced declining usage following controversies tied to its image generation features earlier this year.
Selling unused compute capacity to Anthropic provides a new revenue stream while xAI and SpaceX continue preparing for a potential future public offering.
The agreement also supports xAI’s broader infrastructure ambitions, including plans connected to orbital data center systems that the company has previously discussed publicly.
Compute Capacity Becomes Strategic Asset
The arrangement differs from how several other major technology companies currently manage AI infrastructure.
Companies such as Google and Meta Platforms have continued reserving significant compute capacity for their own internal AI development efforts rather than renting large portions of it to external customers.
Last month, Sundar Pichai said during an earnings call that Google Cloud revenue had been limited by compute shortages because the company prioritized internal AI product development over external GPU rentals.
Meta has also expanded its infrastructure footprint aggressively. In January, Mark Zuckerberg introduced the company’s Meta Compute initiative, describing infrastructure investment as a long-term strategic advantage tied to future AI systems.
The comparison highlights a different direction for xAI, which increasingly resembles a cloud infrastructure provider that purchases hardware, primarily from Nvidia, and rents computing resources to AI developers.
Infrastructure Focus Raises Questions About Software Ambitions
The infrastructure-focused strategy places xAI closer to companies operating in the so-called neocloud market, where providers specialize in supplying GPU resources for AI training and inference workloads.
The economics of that business remain challenging because providers face pressure from hardware suppliers as well as changing demand cycles among AI companies.
xAI was reportedly valued at $230 billion during its January funding round. By comparison, cloud infrastructure provider CoreWeave, which manages a similar scale of compute resources, currently carries a market value below one-third of that figure.
Musk has outlined broader ambitions for xAI’s infrastructure operations, including orbital data centers targeted for deployment by 2035 and plans for in-house chip manufacturing through a facility known as Terafab.
Earlier internal presentations at xAI also highlighted software-related goals involving coding tools, partnerships with coding startup Cursor, and projects tied to computer-use systems and digital twin technology.
Those initiatives would require large and sustained access to computing power. The Anthropic agreement, however, indicates that xAI is currently prioritizing monetization of infrastructure capacity alongside its own AI development efforts.
