This is not investment advice. The author has no position in any of the stocks mentioned. Wccftech.com has a disclosure and ethics policy.
With the global AI industry doing all it can to gain access to precious chips from NVIDIA and outpace Microsoft-backed OpenAI in the race to develop the most advanced artificial intelligence model, Elon Musk has shared fresh details about his plans to build a GPU cluster to train xAI's Grok AI model. According to Musk, xAI has decided to rely only on itself to build "the most powerful training cluster in the world" after parting ways with Oracle in order to speed up progress on AI development. Oracle has provided 24,000 NVIDIA Hopper GPUs to xAI for training the Grok 2 AI model, which Musk says will be ready for release in August.
Musk shared the latest details on xAI's 100,000 GPU cluster in response to a media report stating that talks between the AI firm and Oracle to expand their existing agreement have ended. Under the current deal, xAI has been using 24,000 of NVIDIA's H100 GPUs to train the Grok 2 AI model, and by the looks of it, the firm was interested in expanding the cooperation to cover Musk's 100,000 GPU system. Oracle, according to the media report, is also working to supply Microsoft with a cluster of 100,000 NVIDIA Blackwell GB200 chips, the latest AI processors on the market.
Musk shared that xAI is building its 100,000 GPU AI system internally to achieve the "fastest time to completion." He believes this is necessary to "catch up" with other AI companies, as, according to him, "being faster than any other AI company" is very important for xAI's "fundamental competitiveness."
Today's details follow Musk's statements early last month, which revealed xAI's plans to build a multi-billion