Intel has been demoing its upcoming Meteor Lake CPU at the Computex show (via PC World) and the big noise with the new chip is AI acceleration. More specifically, Meteor Lake comes with a new VPU or "Versatile Processing Unit" which is apparently powerful enough to run Stable Diffusion locally rather than in the cloud.
The VPU is essentially a dedicated inferencing accelerator included in Meteor Lake as a separate tile and designed for AI tasks like computer vision and deep learning.
AI is obviously experiencing explosive growth right now, but the likes of ChatGPT, Midjourney, and DALL-E 2 have thus far run in the cloud courtesy of huge training models. It's hard to see any relevance of a tiny client-based hardware AI accelerator to that, right?
In fact, Intel does see a role for some level of local AI acceleration on PCs. Intel says the huge costs of rolling out AI apps like ChatGPT to millions upon millions of users can be reduced by putting some of that processing load on client PCs.
Intel also reckons that local AI acceleration allows for massive distributed scaling, better privacy for users by keeping data local, and lower latency because the AI processing is actually happening on your device.
Notionally, that makes sense. But it's not clear how it will translate into real world applications. Certainly for now there's no way to offload any part of ChatGPT's processing to local devices, and we've heard of no plans from OpenAI to do anything remotely like that.
However, Intel did show off a version of the Stable Diffusion image generator running entirely locally on a Meteor Lake-powered laptop that specifically wasn't connected to the internet. Traditionally, you'd want a powerful GPU to run Stable Diffusion locally.
Read more on pcgamer.com