Of course Nvidia was going to have at least one supercomputer; I mean it's been making data centre GPUs for a million years now. And I do know that DLSS requires a certain amount of pre-training to be able to make its funky upscaling algorithms as stable and effective as they are. But this dumb-ass Dave didn't realise until last week that Nvidia has a dedicated supercomputer, "with 1000s of our latest and greatest GPUs," that has been running full-time for the past six years just to improve the quality of DLSS.
At last week's RTX Blackwell Editor's Day in glorious Las Vegas*, deep in the midst of CES 2025, Brian Catanzaro, Nvidia's VP of applied deep learning research, took to the stage to talk through DLSS 4 and the many changes and challenges it brings with it.
As well as the game-changing switch from convolutional neural networks to the new transformer model for DLSS 4, the other thing to catch my ear was the aside Catanzaro made about how Nvidia trains its models.
"How is it that we've been able to make progress with DLSS over the years?" he asks. "You know, it's been a six-year, continuous learning process for us.
"Actually, we have a big supercomputer at Nvidia, with many 1000s of our latest and greatest GPUs, that is running 24/7, 365 days a year, improving DLSS. And it's been doing that for six years."
Maybe it's me being utterly naïve, but I didn't realise the scale of resources Nvidia was dedicating to making its upscaling solution better over time.
I figured Nvidia might give its DLSS gang some dedicated time with a multi-million dollar machine for training purposes every now and then, but I didn't realise DLSS was living rent-free inside the mind of its very own supercomputer.
A supercomputer purely dedicated to grinding away on images that just aren't quite good enough.
"What we're doing during this process," Catanzaro continues, "is we're analysing failures."