The first GPU company to offer it was Nvidia in 2022, followed by AMD one year later, and now Intel has joined in the fun. I am, of course, talking about frame generation, and while none of the systems are perfect, they all share the same issue: increased input latency. However, researchers at Intel have developed a frame generation algorithm that adds no lag whatsoever, because it extrapolates new frames rather than interpolating between rendered ones.
If you've a mind for highly technical documents, you can read the full details about how it all works on one of the researchers' GitHub pages. Just as with all rendering technologies, this one has a catchy name and a suitable initialism: G-buffer Free Frame Extrapolation (GFFE). To understand what it's doing differently to DLSS, FSR, and XeSS-FG, it helps to have a bit of an understanding of how the current frame generation systems work.
AMD, Intel, and Nvidia have different algorithms but they take the same fundamental approach: render two frames in succession and store both of them in the graphics card's VRAM, rather than displaying them straight away.
Then, in place of rendering another frame, the GPU either runs a bunch of compute shaders (as per AMD's FSR) or an AI neural network (Nvidia's DLSS and Intel's XeSS) to analyse the two frames for changes and motion, then creates a new frame based on that information. This generated frame is sequenced between the two previously rendered frames, and all three are sent off to the monitor in that order for display.
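In very rough C++-flavoured pseudocode, the flow looks something like the sketch below. To be clear, the function names and the Frame struct are stand-ins of my own for illustration, not the actual DLSS, FSR, or XeSS-FG APIs, which live inside the drivers and runtime libraries.

```cpp
// Minimal sketch of frame interpolation sequencing. All names are
// hypothetical stand-ins, not any vendor's real API.
#include <deque>
#include <iostream>
#include <string>

struct Frame { std::string label; };          // stand-in for a rendered image

Frame RenderFrame(int n)                      // "normal" rendering of frame n
{
    return Frame{"rendered #" + std::to_string(n)};
}

Frame GenerateIntermediate(const Frame& a, const Frame& b)
{
    // In the real systems this step is a pile of compute shaders (FSR)
    // or a neural network (DLSS, XeSS) analysing the two frames for
    // changes and motion.
    return Frame{"generated between (" + a.label + ") and (" + b.label + ")"};
}

void Present(const Frame& f) { std::cout << "display: " << f.label << '\n'; }

int main()
{
    // 1. Render two frames in succession and hold them back in VRAM
    //    instead of displaying them immediately.
    Frame a = RenderFrame(1);
    Frame b = RenderFrame(2);

    // 2. Create a new frame from the motion between them.
    Frame mid = GenerateIntermediate(a, b);

    // 3. Sequence the generated frame between the two rendered ones and
    //    send all three to the monitor in that order.
    std::deque<Frame> queue{a, mid, b};
    for (const Frame& f : queue) Present(f);
}
```

The important detail is in step 3: the second rendered frame can't go to the screen until the in-between frame has been generated and shown, which is where the extra lag comes from.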
While none of the three technologies produce absolutely perfect frames every time, more often than not you don't really notice the flaws, because each generated frame only appears on screen for a fraction of a second before a normally rendered frame takes its place. However, what you can easily notice is the increased input lag.
Game engines poll for input changes at fixed time intervals and then apply any changes to the next frame to be rendered. Generated frames won't have such information applied to them, and because two 'normal' frames have to be rendered and held back before the generated one can be slotted between them, the time between pressing a button and seeing the result on screen goes up.
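To put very rough numbers on it, here's a back-of-the-envelope sketch in the same style as before. The 60 fps figure and the generation cost are assumptions purely for illustration, not measured values from any of the three technologies.

```cpp
// Back-of-the-envelope illustration of why interpolation adds lag and
// extrapolation doesn't. All timings here are assumed, not measured.
#include <iostream>

int main()
{
    const double native_frame_ms = 1000.0 / 60.0; // assume the game renders at 60 fps
    const double generation_ms   = 2.0;           // assumed cost of creating a frame

    // Interpolation: the newest rendered frame, which carries the latest
    // input, has to wait in VRAM while the in-between frame is generated
    // and displayed first, so it reaches the screen later than it would
    // without frame generation.
    double interpolation_delay = native_frame_ms / 2.0 + generation_ms;

    // Extrapolation (the GFFE approach): new frames are predicted forward
    // from frames that have already been shown, so rendered frames are
    // never held back and no extra input lag is added.
    double extrapolation_delay = 0.0;

    std::cout << "added lag, interpolation: ~" << interpolation_delay << " ms\n";
    std::cout << "added lag, extrapolation: ~" << extrapolation_delay << " ms\n";
}
```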