There's a new Nvidia architecture in town, and it's a doozy. Blackwell has just been announced by Nvidia CEO Jensen Huang at GTC and will feature inside the ludicrously large B200 GPU. To call it a 'GPU' would technically be wrong, however: it's a dual-GPU package with a total of 208 billion transistors across it. To put that into perspective, the previous must-have compute chips out of Nvidia, the Hopper H200 and H100, have just over 80 billion transistors apiece. An RTX 4090 has 76.3 billion. We're looking at over double that with Blackwell, which makes a lot of sense considering it's dual-wielding GPU dies and a new chip-to-chip interconnect.
Blackwell is unfortunately not for gaming. Boo! I'm not sure our bank accounts would be ready for such a mighty thing anyway. Blackwell is instead intended for rollout within data centres chasing bigger and bigger compute figures. Why? Artificial intelligence, mostly.
But since we're still awaiting news on the next generation of GeForce graphics cards, let's keep in mind which features might end up being transplanted from these mahoosive Blackwell chips into whatever architecture turns up in our next gaming GPUs. That could also be Nvidia Blackwell, albeit a stripped-down version, as gaming cards won't need all of the gubbins included with the B200/B100.
Let's start with something we probably will see in a future GeForce GPU: Blackwell features new fifth-generation Tensor Cores. These are accelerators for the sorts of instructions used heavily within AI applications, i.e. inference and training, and Nvidia claims the fifth-gen versions can bump performance by up to 30 times. The new Tensor Cores support additional, lower-precision number formats (down to FP4) and pair with an updated version of the Transformer Engine, first introduced with Hopper, to accelerate inference and training of large language models.
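If you're wondering what 'using Tensor Cores' actually looks like from the software side, here's a minimal sketch, assuming PyTorch on an Nvidia GPU. Tensor Cores are engaged automatically when matrix maths runs in one of the lower-precision formats they accelerate, which is exactly what autocast arranges; the layer sizes below are made up purely for illustration.

```python
import torch

# Made-up sizes, just to give the matrix multiply something to chew on.
x = torch.randn(1024, 4096, device="cuda")
linear = torch.nn.Linear(4096, 4096).cuda()

# Autocast runs eligible ops (like this linear layer's matmul) in
# bfloat16, a low-precision format that Tensor Cores accelerate.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    y = linear(x)

print(y.dtype)  # torch.bfloat16
```

Each new Tensor Core generation broadens the menu of formats this trick works with, FP4 being Blackwell's headline addition.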
Since GeForce cards use Tensor Cores for features such as DLSS, and we've already seen fourth-generation Tensor Cores make the leap from Nvidia's enterprise-only Hopper architecture into the Ada Lovelace GPUs powering RTX 40-series cards, there's every chance these fifth-gen cores will find their way into future GeForce cards too.