Nvidia has been dominating the market for chips capable of training generative AI programs, but AMD is now trying to claim its share of the pie through a new enterprise-grade GPU.
The company today announced the AMD Instinct MI300X, a so-called “accelerator” chip designed to train large language models that can power programs such as OpenAI’s ChatGPT.
“AI is really the defining technology that’s shaping the next generation of computing, and frankly it’s AMD’s largest and most strategic long-term growth opportunity,” said AMD CEO Lisa Su during the product’s unveiling.
The MI300X tries to beat the competition by featuring up to an “industry-leading” 192GB of HBM3 memory and being built on AMD’s data center-focused CDNA 3 architecture, which is designed for AI workloads. Customers will be able to pack eight MI300X accelerators into a single system, enabling the GPUs to train larger AI models than competing setups.
“For the largest models, it actually reduces the number of GPUs you need, significantly speeding up the performance, especially for inference, as well as reducing total costs of ownership,” Su said.
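Su’s point about needing fewer GPUs comes down to simple memory arithmetic: a model’s weights must fit in accelerator memory, so larger per-GPU capacity means fewer cards. Here is a rough back-of-the-envelope sketch (not from AMD; the model sizes are hypothetical, and it counts weights only, ignoring activations and KV cache, which add more in practice):

```python
import math

def gpus_needed(params_billion: float, bytes_per_param: int, gpu_mem_gb: int) -> int:
    """Minimum GPUs required just to hold a model's weights in memory.

    Weights-only approximation: 1 billion parameters * bytes-per-param
    is roughly that many GB of memory.
    """
    model_gb = params_billion * bytes_per_param
    return math.ceil(model_gb / gpu_mem_gb)

# Hypothetical 66B-parameter model at 16-bit precision (2 bytes/param) ~ 132 GB
print(gpus_needed(66, 2, 192))  # 1 -- fits on a single 192 GB accelerator
print(gpus_needed(66, 2, 80))   # 2 -- needs two 80 GB accelerators
```

The same logic scales up: an eight-accelerator system at 192GB each offers roughly 1.5TB of combined memory, which is what lets a single box hold models that would otherwise be sharded across more GPUs.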
The MI300X is also derived from AMD’s other AI-focused chip, the MI300A, which is slated to arrive in supercomputers. The difference is that the company swapped the MI300A’s Zen 4 CPU chiplets for GPU chiplets, making the MI300X a pure GPU processor.
“You might see it looks very, very similar to MI300A, cause basically we took three chiplets off and put two (GPU) chiplets on, and we stacked more HBM3 memory,” Su added. “We truly designed this product for generative AI.”
In a demo, Su also showed a single MI300X equipped with 192GB of memory running the open-source
Read more on pcmag.com