Researchers at the University of California, working with Intel, have developed an algorithm that uses AI to extrapolate new frames, with claims of lower input latency than current frame generation methods while retaining good image quality. There's no indication that Intel is planning to implement the system for its Arc GPUs just yet, but if the work continues, we'll probably see some Intel-powered frame generation in the near future.
Presented at this year's Siggraph Asia event in Australia (via Wccftech), the research comes from a group at the University of California, sponsored and supported by Intel, to develop a system that artificially creates frames to boost the performance of games and other applications that do rendering.
More commonly known as frame generation, the technique has been familiar to gamers since Nvidia included it in its DLSS 3 package in 2022. That system uses a deep learning neural network, along with optical flow analysis, to examine two rendered frames and produce an entirely new one, which is inserted between them. Technically, this is frame interpolation, and it's been used in the world of TVs for years.
Earlier this year, AMD offered us its version of frame generation in FSR 3 but rather than relying on AI to do all the heavy lifting, the engineers developed the mechanism to work entirely through shaders.
However, both AMD's and Nvidia's frame generation technologies share a drawback: an increase in latency between a player's inputs and seeing them take effect on screen. This happens because two full frames must be fully rendered before the interpolated one can be generated and then inserted into the chain of frames.
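As a rough back-of-the-envelope sketch (the numbers and function names below are illustrative, not taken from the paper), the latency difference comes down to whether a finished frame must be held back while its successor is rendered:

```python
# Illustrative only: why interpolation adds input latency but
# extrapolation need not. Values assume ~60 fps rendering.
FRAME_TIME_MS = 16.7  # time to render one frame at roughly 60 fps

def interpolation_holdback(frame_time_ms: float) -> float:
    """Interpolation needs frames N and N+1 before it can build the
    in-between frame, so frame N is held back roughly one full
    frame time before it can be displayed."""
    return frame_time_ms

def extrapolation_holdback(frame_time_ms: float) -> float:
    """Extrapolation predicts the next frame purely from frames that
    are already rendered, so no completed frame is held back."""
    return 0.0

print(interpolation_holdback(FRAME_TIME_MS))  # ~16.7 ms of added latency
print(extrapolation_holdback(FRAME_TIME_MS))  # 0.0
```

This is the core appeal of the extrapolation approach: the generated frame can be shown without waiting for the next real frame to finish.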
Read more on pcgamer.com