X Chairman Elon Musk announces the commencement of Grok 3 training at Memphis using current-gen NVIDIA H100 GPUs.
xAI, the AI venture from X's chairman, has officially begun training Grok 3 on NVIDIA's most powerful data center GPU, the H100. Elon Musk proudly announced this on X, calling it 'the most powerful AI training cluster in the world!'. In the post, he said that training runs on 100,000 liquid-cooled H100 GPUs on a single RDMA fabric, and he congratulated the xAI, X, and Nvidia teams on starting the training at Memphis.
Training started at 4:20 am Memphis local time, and in a follow-up post, Musk claimed that the world's most powerful AI will be ready by December this year. According to reports, Grok 2 will be ready for release next month and Grok 3 by December. The announcement came around two weeks after xAI and Oracle ended their $10 billion server deal.
xAI had been renting Nvidia's AI chips from Oracle but decided to build its own cluster, ending the existing deal with Oracle, which was supposed to run for several more years. The project now aims to build a supercomputer superior to anything Oracle could offer, using a hundred thousand high-performance H100 GPUs. Each H100 costs roughly $30,000, and while Grok 2 used 20,000 of them, Grok 3 requires five times as many GPUs to train its AI chatbot.
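Taking the article's figures at face value (all three numbers come straight from the text above; the totals are simple multiplication, not reported figures), the GPU bill alone works out to roughly $3 billion:

```python
# Back-of-the-envelope math from the figures cited in the article.
H100_UNIT_COST = 30_000   # rough per-GPU cost in USD, as cited
GROK2_GPUS = 20_000       # H100s reportedly used for Grok 2
SCALE_FACTOR = 5          # Grok 3 reportedly needs five times as many

grok3_gpus = GROK2_GPUS * SCALE_FACTOR        # 100,000 GPUs, matching the announced cluster
grok3_gpu_cost = grok3_gpus * H100_UNIT_COST  # hardware cost of the GPUs alone

print(f"Grok 3 cluster: {grok3_gpus:,} GPUs, about ${grok3_gpu_cost / 1e9:.1f} billion in GPUs")
```

This lines up with the 100,000-GPU figure in Musk's announcement and excludes networking, cooling, power, and facility costs.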
The decision comes as a surprise since Nvidia is about to ship its newer H200 GPUs in Q3. The H200 entered mass production in Q2 and uses the same Hopper architecture with an upgraded memory configuration, delivering up to 45% faster response times for generative AI outputs. Following the H200, Nvidia is expected to launch its Blackwell-based B100 and B200 GPUs at the end of the year.
Read more on wccftech.com