Elon Musk’s xAI is charging forward with a supercomputer expansion in Memphis, Tennessee, adding 100,000 Nvidia GPUs to the facility’s existing 100,000 and doubling its GPU count in one of the most ambitious AI infrastructure projects yet.
Targeting the cutting edge of artificial intelligence, xAI’s planned 200,000-GPU setup will not only rank among the largest AI supercomputers in the world but also serve as a step toward Musk’s longer-term goal of scaling up to 300,000 GPUs by summer 2025.
xAI’s “Colossus” supercomputer is being assembled at record speed, housed within a vast 785,000-square-foot structure filled with Nvidia’s H100 and H200 GPUs. Musk has emphasized the facility’s rapid progress: it was built and activated within 122 days, a striking feat in the world of large-scale supercomputing, where infrastructure often takes years to finalize.
The costs for this expansion are substantial. With each Nvidia H100 GPU priced near $30,000, the initial 100,000-GPU cluster represents an estimated $3 billion in GPU hardware alone. Upgrading to H200 units, which carry more memory and an estimated $40,000 price tag per unit, will only increase the investment. Still, Musk is committed to supporting xAI’s advancement, focused on developing AI technologies such as Grok, the company’s conversational AI.
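As a rough sketch of how those figures scale, here is a back-of-envelope estimate based on the per-unit prices cited above, which are market estimates rather than confirmed purchase prices:

```python
# Back-of-envelope GPU hardware cost estimate for the Colossus build-out.
# Unit prices are the rough estimates cited in the article, not confirmed figures.

H100_UNIT_PRICE = 30_000   # USD, estimated per-GPU price
H200_UNIT_PRICE = 40_000   # USD, estimated per-GPU price

initial_gpus = 100_000     # existing cluster
expansion_gpus = 100_000   # planned addition, bringing the total to 200,000

initial_cost = initial_gpus * H100_UNIT_PRICE
expansion_cost_h100 = expansion_gpus * H100_UNIT_PRICE
expansion_cost_h200 = expansion_gpus * H200_UNIT_PRICE

print(f"Initial 100,000 H100s:      ~${initial_cost / 1e9:.1f}B")        # ~$3.0B
print(f"Expansion with H100s:       ~${expansion_cost_h100 / 1e9:.1f}B") # ~$3.0B
print(f"Expansion with H200s:       ~${expansion_cost_h200 / 1e9:.1f}B") # ~$4.0B
```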
Tech enthusiasts were offered a sneak peek into the facility on Monday when ServeTheHome released footage showing row upon row of Nvidia-powered server racks. Musk’s efforts mirror a broader trend among tech giants, as Meta, OpenAI, and Microsoft are also building up GPU resources to tackle the increasing demands of advanced AI.
Featured Image courtesy of Nvidia