NVIDIA has just announced a boosted version of its GH200 GPU, now equipped with HBM3e, the world's fastest memory.
According to NVIDIA, the Hopper-based GH200 is now the world's first GPU with HBM3e memory, offering not just higher memory bandwidth but also higher memory capacity. A dual Grace Hopper system now offers 3.5x more capacity and 3x higher bandwidth than the existing offering, with up to 282 GB of HBM3e memory.
HBM3e memory itself offers a 50% speed-up over the existing HBM3 standard, delivering up to 10 TB/s of bandwidth per system and 5 TB/s per chip. HBM3e will power a range of GH200-based systems (400 and counting) spanning combinations of NVIDIA's latest CPU, GPU, and DPU architectures, including Grace, Hopper, Ada Lovelace, and BlueField, to meet the surging demand within the AI segment.
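A quick back-of-the-envelope check shows the quoted figures are internally consistent. The assumption here is that the "per system" numbers refer to the dual Grace Hopper configuration, i.e. two GH200 chips:

```python
# Sanity-check the quoted GH200 HBM3e figures.
# Assumption: "per system" means the dual (2-chip) Grace Hopper configuration.

per_chip_bw_tbs = 5.0                      # quoted: 5 TB/s per chip
system_bw_tbs = 2 * per_chip_bw_tbs
print(system_bw_tbs)                       # matches the quoted 10 TB/s per system

# A 50% speed-up over HBM3 implies a per-chip HBM3 baseline of roughly:
hbm3_baseline_tbs = per_chip_bw_tbs / 1.5
print(round(hbm3_baseline_tbs, 2))         # ~3.33 TB/s per chip

# 282 GB across a dual system implies the HBM3e capacity per chip:
per_chip_capacity_gb = 282 / 2
print(per_chip_capacity_gb)                # 141.0 GB per chip
```

In other words, the per-system bandwidth figure is simply twice the per-chip figure, and the 282 GB dual-system capacity works out to 141 GB of HBM3e per GH200 chip.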
NVIDIA hasn't announced which vendor will supply the brand-new HBM3e memory dies for its GH200 AI GPU, but SK Hynix was recently reported to have received a request from NVIDIA to sample its next-generation HBM3e DRAM. Samsung also has faster HBM3 dies that can offer up to 5 TB/s of bandwidth per stack, though SK Hynix appears to be the likely choice for GH200 GPUs.
NVIDIA today announced the next-generation NVIDIA GH200 Grace Hopper platform — based on a new Grace Hopper Superchip with the world’s first HBM3e processor — built for the era of accelerated computing and generative AI.
Created to handle the world’s most complex generative AI workloads, spanning large language models, recommender systems and vector databases, the new platform will be available in a wide range of configurations.
The dual configuration delivers up to 3.5x more memory capacity and 3x more bandwidth than the current-generation offering, with up to 282 GB of HBM3e memory.
Read more on wccftech.com