Microsoft and OpenAI have unveiled plans to construct an unparalleled supercomputer named “Stargate,” signaling a momentous leap forward in artificial intelligence (AI) technology. This ambitious project, estimated to cost a staggering $100 billion, reflects the escalating demand for advanced data centers capable of supporting cutting-edge AI applications and heralds a new era in AI development.
“We are always planning for the next generation of infrastructure innovations needed to continue pushing the frontier of AI capability,” a Microsoft representative told Reuters.
Sources close to the project, as disclosed to The Information, indicate that Microsoft is poised to take the lead in financing this monumental endeavor, which would cost more than 100 times as much as today's largest existing data centers. Scheduled for launch in 2028, Stargate is set to cap a series of supercomputers Microsoft and OpenAI envision building over the next six years, part of a comprehensive strategy aimed at unlocking the full potential of AI.
The Information’s report echoes rumors from OpenAI insider “Jimmy Apples,” who recently wrote on X that OpenAI CEO Sam Altman is “building a big special something” “somewhere in the desert.” Energy companies Helion Energy and American Clean Power are allegedly involved. (Update: Jimmy Apples has confirmed he was referring to Stargate.)
Insider reports detailed by The Information reveal that Microsoft and OpenAI are currently in the third of five phases of their strategic plan, with Stargate, the fifth and final phase, marking its apex. The project calls not only for remarkable hardware construction but also for substantial investment in securing the AI chips essential to the supercomputer’s operation.
ALSO READ: China has Developed the World’s Most Energy-Efficient AI microchips
According to Interesting Engineering, “They expect to achieve this feat by 2026, and the supercomputer might be used to power OpenAI’s voice recognition or text-to-video AI tools.”
The burgeoning demand for generative AI has catalyzed a surge in specialized data center requirements, posing challenges for AI development, particularly in securing the graphics processing units (GPUs) crucial for AI model training. The scarcity and exorbitant costs of these chips have prompted companies to explore alternative solutions.
“By virtue of generative AI workloads requiring much more compute, and more widely affecting energy efficiency and cooling in the data center, undoubtedly cloud costs will continue to rise as more companies adopt generative AI,” Tracy Woo, an IT research analyst, told the WSJ.
Leading AI chipmaker Nvidia has also faced challenges in meeting the soaring demand. Its latest chip, the Blackwell B200, commands a hefty price tag ranging from $30,000 to $40,000, as confirmed by CEO Jensen Huang in a recent CNBC interview. Despite substantial investments in research and development, Nvidia’s market dominance has raised concerns regarding monopolistic practices and affordability.
ALSO READ: Microsoft Unveils GPT-4-Turbo in Copilot, Offering Enhanced AI Experience for Free
In response to these challenges, industry heavyweights including Intel, Qualcomm, Google Cloud, Arm, and Samsung have united to establish the Unified Acceleration Foundation. This collaborative initiative aims to create an open-standard accelerator programming model to challenge Nvidia’s software and hardware supremacy in the AI sector. By fostering cooperation and innovation, the foundation seeks to democratize access to AI technology and encourage healthy competition within the market.