NVIDIA has unveiled its latest Blackwell GB200 NVL4 solution, packing four Blackwell GPUs & two Grace CPUs into one powerful HPC & AI solution.
As part of its SC24 announcements, NVIDIA is unveiling two brand-new hardware platforms, one based on its existing Hopper stack and the other powered by its latest Blackwell stack. These two solutions are designed for enterprise servers, powering accelerated HPC and AI workloads.
Starting with the NVIDIA H200 NVL, the company is now confirming the general availability of these PCIe-based Hopper cards, which can connect up to 4 GPUs through an NVLINK domain, offering up to seven times the bandwidth of a standard PCIe connection. The company states that the H200 NVL solutions can fit into any data center and offer a range of flexible server configurations that are optimized for hybrid HPC and AI workloads.
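The "seven times" figure lines up with NVIDIA's published per-GPU numbers. A quick back-of-the-envelope sketch (the bandwidth values below are assumptions taken from NVIDIA's Hopper specs, not stated in this article):

```python
# Rough check of the "seven times faster" NVLink claim, assuming
# 900 GB/s per GPU for fourth-generation NVLink and ~128 GB/s for a
# PCIe Gen5 x16 link (NVIDIA's published figures, not measurements).
NVLINK_BW_GBPS = 900      # fourth-gen NVLink, per Hopper GPU
PCIE_GEN5_X16_GBPS = 128  # PCIe Gen5 x16, bidirectional

speedup = NVLINK_BW_GBPS / PCIE_GEN5_X16_GBPS
print(f"NVLink vs PCIe Gen5 x16: ~{speedup:.1f}x")  # ~7.0x
```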
In terms of specifications, the Hopper H200 NVL solution offers 1.5x more HBM memory, 1.7x the LLM inference performance, and 1.3x the HPC performance. You are getting 114 SMs with a total of 14,592 CUDA cores, 456 tensor cores, and up to 3 PFLOPs of FP8 (FP16 accumulated) performance. The GPU features 141 GB of HBM3e memory configured across a 5120-bit interface and has a configurable TDP of up to 600 Watts.
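The core counts follow directly from the SM tally; a minimal sketch, assuming Hopper's standard layout of 128 FP32 CUDA cores and 4 Tensor Cores per SM:

```python
# Derive the H200 NVL core counts from its 114 SMs, assuming the
# standard Hopper per-SM layout (128 FP32 CUDA cores, 4 Tensor Cores).
SM_COUNT = 114
CUDA_CORES_PER_SM = 128
TENSOR_CORES_PER_SM = 4

cuda_cores = SM_COUNT * CUDA_CORES_PER_SM      # 14,592
tensor_cores = SM_COUNT * TENSOR_CORES_PER_SM  # 456
print(cuda_cores, tensor_cores)  # 14592 456
```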
Moving over to Blackwell, we have our first showcase of the NVIDIA GB200 NVL4, a brand-new module that is essentially a scaled-up version of the original GB200 Grace Blackwell Superchip AI solution. The GB200 NVL4 module doubles the CPU and GPU count and also adds increased memory.
You are getting four Blackwell GPUs configured on a larger board with two Grace CPUs. The module is designed as a single-server solution with a 4-GPU NVLINK domain and 1.3 TB of coherent memory. In terms of performance, the module will offer a 2.2x improvement in simulation workloads and a 1.8x uplift in both training and inference compared to the previous-generation GH200 NVL4.
Read more on wccftech.com