In a Q&A session at this year's Computex event, Nvidia CEO Jen-Hsun Huang was asked whether AI will be used to generate games' graphics directly, augmenting the traditional rasterization method. After pointing out that neural graphics are already in use, in the likes of frame generation, Huang went on to say that AI will increasingly infuse games and PCs, creating high-resolution objects, textures, and characters.
"We already use the idea of neural graphics," said Jen-Hsun Huang in the Q&A session. "We can achieve very high-quality ray tracing, path tracing 100% of the time, and still achieve excellent performance. We also generate frames between frames, not interpolation but frame generation. And so not only do we generate pixels, we also generate frames."
Most PC gamers, especially those with an Nvidia graphics card in their rig, will know that AI is currently leveraged in games via DLSS: initially just an upscaling system, but now comprising a frame generation system and a neural denoiser for cleaning up ray-traced images. In the case of DLSS Super Resolution, the upscaling itself isn't done by AI; that's handled by a normal shader routine. The resulting image is then scanned and corrected by a neural network.
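As a rough illustration of that two-stage idea (not Nvidia's actual DLSS implementation, whose network architecture and inputs are proprietary), here is a minimal PyTorch sketch: a conventional, non-AI upscale followed by a small convolutional network that corrects the enlarged image. The `CorrectionNet` class and its layer sizes are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CorrectionNet(nn.Module):
    """Toy stand-in for a trained correction network (hypothetical,
    not Nvidia's DLSS model). It predicts a residual that is added
    to the conventionally upscaled image."""
    def __init__(self, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual correction

# A low-resolution rendered frame: (batch, channels, height, width).
low_res = torch.rand(1, 3, 540, 960)

# Stage 1: the conventional "shader" upscale -- here just bilinear
# resampling, standing in for whatever the real routine does.
upscaled = F.interpolate(low_res, scale_factor=2, mode="bilinear",
                         align_corners=False)

# Stage 2: the neural network scans and corrects the enlarged image.
net = CorrectionNet()
with torch.no_grad():
    corrected = net(upscaled)

print(corrected.shape)  # torch.Size([1, 3, 1080, 1920])
```

In a real system the correction network would be trained against high-resolution reference renders; here it is untrained and only demonstrates the shape of the pipeline.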
In the case of DLSS Frame Generation, two previously rendered frames, along with some other information from the rendering pipeline, get fed into a different neural network. This one has been trained on how motion affects images, and the result is an entirely new frame that gets inserted between the other two.
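A hedged sketch of that idea follows; it is purely illustrative, since Nvidia's actual frame generation network, training data, and pipeline inputs are not public. Here a placeholder network takes two rendered frames plus a motion-vector field (one plausible example of the "other information" mentioned above) and predicts the in-between frame.

```python
import torch
import torch.nn as nn

class FrameGenNet(nn.Module):
    """Hypothetical intermediate-frame predictor. Inputs: two RGB
    frames and a 2-channel motion-vector field, concatenated along
    the channel axis. Output: the in-between RGB frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 3 + 2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, frame_a, frame_b, motion):
        x = torch.cat([frame_a, frame_b, motion], dim=1)
        return self.net(x)

frame_a = torch.rand(1, 3, 270, 480)   # previously rendered frame
frame_b = torch.rand(1, 3, 270, 480)   # next rendered frame
motion  = torch.rand(1, 2, 270, 480)   # per-pixel motion vectors

net = FrameGenNet()
with torch.no_grad():
    in_between = net(frame_a, frame_b, motion)

# in_between would be displayed between frame_a and frame_b.
print(in_between.shape)  # torch.Size([1, 3, 270, 480])
```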
It doesn't have to be a completed frame that's upscaled or generated in this way, as a texture is also just a 2D grid of pixels. In theory, any data array could be processed by AI and improved, raising its resolution or level of detail. And that's exactly what Huang was referring to: taking low-detail meshes and textures and using AI to create better versions of them.
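To underline that generality (again a speculative sketch, not a shipped Nvidia feature), the same upscale-and-correct pattern from the earlier snippet applies to any 2D array. Assuming the hypothetical `CorrectionNet` class defined above, an RGBA game texture could be enhanced like this:

```python
import torch
import torch.nn.functional as F

# Any 2D grid of values can be treated like a frame.
# Here: a made-up 256x256 RGBA texture.
texture = torch.rand(1, 4, 256, 256)

# Conventional 4x enlargement, then a learned correction pass,
# reusing the CorrectionNet sketch sized for four channels.
enlarged = F.interpolate(texture, scale_factor=4, mode="bilinear",
                         align_corners=False)
net = CorrectionNet(channels=4)
with torch.no_grad():
    detailed = net(enlarged)

print(detailed.shape)  # torch.Size([1, 4, 1024, 1024])
```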
Huang agreed, saying "The future will even generate textures and generate..."