Nvidia has announced its new Grace Hopper superchip, and its massive memory is enough to make the entire RTX 40-series weep. More than that, it's packing 256 of them into what Jensen Huang calls "one giant GPU": the DGX GH200, an absolutely mega superserver built to accelerate AI.
The superchip comes with 576GB of GPU-accessible memory: 96GB of HBM3 on the Hopper GPU and 480GB of LPDDR5X attached to the Grace CPU. Set against the 24GB of GDDR6X on the RTX 4090, it makes for a stark contrast with what we're used to over in GeForce land.
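For a quick sense of scale, here's a back-of-envelope sketch using the figures above. The memory split and the 4090 comparison come from this article; the script itself is purely illustrative:

```python
# Back-of-envelope comparison of the Grace Hopper superchip's memory pool
# against a single RTX 4090, using the figures quoted in this article.
hbm3_gb = 96       # HBM3 attached to the Hopper GPU
lpddr5x_gb = 480   # LPDDR5X attached to the Grace CPU
rtx_4090_gb = 24   # GDDR6X on an RTX 4090

superchip_total_gb = hbm3_gb + lpddr5x_gb
print(f"Superchip memory pool: {superchip_total_gb}GB")                    # 576GB
print(f"Roughly {superchip_total_gb // rtx_4090_gb}x an RTX 4090's VRAM")  # 24x
```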
Huang says the superchip has "practically the entire computer" on it, including 72 Arm CPU cores, and Nvidia is stuffing 256 of them inside the DGX GH200, a good few racks' worth of AI servers.
The DGX GH200 is big, both literally and figuratively. We're looking at 150 miles of fibre optic cables and 2,000 fans that can move 70,000 cubic feet of air per minute. "It can probably recycle the air in this entire room in a couple of minutes," says Huang.
And at 40,000 pounds, the DGX GH200 is "four elephants, one GPU," as Huang puts it. In other words, you're not going to fit one in your study. But I guess that's part of the beauty of cloud-based acceleration: you don't have to.
Not that we could afford one anyway, considering the price this monster is likely to command. Nvidia didn't confirm a price tag, but I'd bet on it being eye-watering.
The 256 Grace Hopper superchips in the DGX GH200 are tied together through Nvidia's NVLink switches, with 900GB/s of chip-to-chip bandwidth apiece. Combined, that puts the whole system somewhere in the region of an exaflop of AI computing power.
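To see how 256 chips land at "an exaflop", here's a rough sketch. The roughly 4 petaflops of FP8 per chip is my assumption, based on Hopper's published sparse FP8 throughput, not a figure from this article:

```python
# Rough sketch of how 256 Grace Hopper superchips get "in the region of an
# exaflop". The per-chip FP8 figure is an assumption (approximately an H100's
# sparse FP8 throughput), not something quoted in this article.
chips = 256
fp8_petaflops_per_chip = 4                  # assumed per-Hopper-GPU FP8 throughput
total_exaflops = chips * fp8_petaflops_per_chip / 1000
print(f"~{total_exaflops:.1f} exaflops of FP8 AI compute")  # ~1.0 exaflops
```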