The race to secure AI dominance is in full swing, with both software and hardware companies trying to one-up each other. Against that backdrop, NVIDIA & OpenAI could be working on an AI model that connects not thousands but millions of GPUs together.
So far, NVIDIA and OpenAI have collaborated on ChatGPT, whose latest GPT-4 model utilizes several thousand AI GPUs from the chip giant. NVIDIA is reported to have supplied around 20,000 of its brand-new AI GPUs to OpenAI, a figure expected to grow further in the coming months. But that is just the tip of the iceberg.
According to Wang Xiaochuan, the businessman and founder of the Chinese search engine Sogou, OpenAI is already working on a more advanced AI computing model that uses far more advanced training methods. This model is said to have the capacity to connect 10 million AI GPUs together.
Reaching even 100,000 GPUs sounds like a big deal, so 10 million AI GPUs sounds far too ambitious. To Wang's credit, however, he has invested in a new AI firm, Baichuan Intelligence, which aims to become a prime competitor to OpenAI in China. The company recently released its Baichuan-13B language model, which is capable of running on consumer-level hardware such as an NVIDIA GeForce RTX 3090 GPU. Wang also raises the possibility of OpenAI becoming the dominant force in the future, and as such, he proposes on his Weibo account that China desperately needs an OpenAI of its own.
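For a sense of why a 13B-parameter model can plausibly fit on a consumer card, here is a rough back-of-the-envelope sketch. It assumes 8-bit quantized weights and the RTX 3090's 24 GB of VRAM, and ignores activation and KV-cache overhead; the actual Baichuan-13B deployment details may differ.

```python
# Back-of-the-envelope VRAM estimate for a 13B-parameter model.
# Assumption: 8-bit (1 byte) quantized weights; activations and
# KV cache overhead are not counted here.
params = 13e9            # 13 billion parameters
bytes_per_param = 1      # int8 quantization (assumed)
weights_gb = params * bytes_per_param / 1e9

rtx_3090_vram_gb = 24    # GeForce RTX 3090 memory capacity
print(f"Approx. weight footprint: {weights_gb:.0f} GB "
      f"(vs. {rtx_3090_vram_gb} GB on an RTX 3090)")
```

Under those assumptions, the weights alone come to roughly 13 GB, leaving headroom on a 24 GB card, which is broadly consistent with the claim of consumer-level hardware support.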
10 million AI GPUs powering OpenAI's future language models is astronomical in scale. Given NVIDIA's current capacity, the company can only produce around a million AI GPUs a year, so it would take roughly 10 years to realize the true scale of this vision.
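The arithmetic behind that timeline is simple. This is only a rough sketch under the article's stated assumption of roughly one million AI GPUs of annual output; NVIDIA's real production figures are not public here.

```python
# Rough production-timeline estimate (assumption: NVIDIA ships roughly
# 1 million AI GPUs per year, as stated above).
target_gpus = 10_000_000      # GPUs the rumored model would connect
annual_output = 1_000_000     # assumed yearly AI GPU production

years_needed = target_gpus / annual_output
print(f"Years of production required: {years_needed:.0f}")  # -> 10
```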