AMD's Instinct MI300X GPU is going to be the most power-hungry chip when it launches, consuming up to 750W of power.
The NVIDIA H100 GPU has been the most power-hungry data center chip ever since it launched, with a rated power consumption of up to 700W. But that changes with the Instinct MI300X, which will consume even more power.
In the footnotes for the AMD Instinct MI300X presentation, the OAM GPU accelerator is said to consume 750W of power. For comparison, the Instinct MI250X GPUs based on the CDNA 2 GPU architecture consumed anywhere from 500W to 560W. So the power requirements have gone up by 34-50% in a single generation. At the same time, this power increase was to be expected given the design of the chip and the performance it has on offer: AMD claims an 8x boost in AI performance while being 5 times more power-efficient than its predecessor.
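For those who want to verify the generational jump, here is a quick sketch using only the wattage figures quoted above:

```python
# Quick check of the generational power increase using the figures quoted above.
mi300x_power = 750          # W, from AMD's presentation footnotes
mi250x_power = (500, 560)   # W, MI250X (CDNA 2) range

for p in mi250x_power:
    increase = (mi300x_power - p) / p * 100
    print(f"vs. {p} W MI250X: +{increase:.0f}%")
# vs. 500 W MI250X: +50%
# vs. 560 W MI250X: +34%
```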
The AMD Instinct MI300X GPU is expected to feature 304 CDNA 3 compute units versus the 220 compute units featured on the MI250X. That's an increase of 38%. Furthermore, the chip packs 192 GB of HBM3 memory. That's 50% more than the 128 GB of HBM2e memory used by the MI250X, and since we are talking HBM3, we are also going to see much faster transfer speeds.
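The same kind of back-of-envelope math covers the spec deltas quoted above:

```python
# Generational spec deltas from the paragraph above.
cu_mi300x, cu_mi250x = 304, 220      # CDNA 3 vs CDNA 2 compute units
mem_mi300x, mem_mi250x = 192, 128    # GB of HBM3 vs GB of HBM2e

print(f"Compute units:   +{(cu_mi300x / cu_mi250x - 1) * 100:.0f}%")    # +38%
print(f"Memory capacity: +{(mem_mi300x / mem_mi250x - 1) * 100:.0f}%")  # +50%
```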
A big advantage of the Instinct MI300X GPU and its massive 192 GB of VRAM is that it can run large language models on fewer GPUs than the competition. The Instinct MI300X can run LLMs with up to 540 billion parameters using fewer GPUs than the competition's 80 GB H100 chips.
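To illustrate why the larger memory pool matters, here is a rough, hypothetical estimate (the 16-bit weight assumption and the exclusion of runtime overhead are ours, not AMD's figures):

```python
import math

# Rough, illustrative estimate (not an AMD figure): how many accelerators are
# needed just to hold a 540-billion-parameter model's weights in 16-bit
# precision, ignoring activations, KV cache, and other runtime overhead.
params = 540e9
bytes_per_param = 2                            # FP16/BF16 weights
weights_gb = params * bytes_per_param / 1e9    # ~1080 GB of weights

for name, vram_gb in (("MI300X (192 GB)", 192), ("H100 (80 GB)", 80)):
    print(f"{name}: at least {math.ceil(weights_gb / vram_gb)} GPUs")
# MI300X (192 GB): at least 6 GPUs
# H100 (80 GB): at least 14 GPUs
```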
These power requirements will continue to go up as GPUs and HPC accelerators become more and more powerful. A recent Gigabyte server roadmap showcases how CPUs, GPUs, and APUs are getting close to the 1000W power barrier.
Read more on wccftech.com