When Intel’s “Meteor Lake” processors launch, they’ll feature not only CPU cores spread across two on-chip tiles and an on-die GPU, but also the company’s first-ever Neural Processing Unit (NPU), dedicated to AI workloads.
According to Intel, the NPU means generative AI programs such as Stable Diffusion, or today's chatbots working off of a locally hosted language model, can run natively on Meteor Lake laptops, without suffering from slow performance, poor quality, or crippling power demands.
“At Intel, our goal is to democratize AI, building it around standardized interfaces, making it accessible to anyone, anywhere in the world,” said Tim Wilson, Intel’s general manager for SOC design.
The resulting NPU promises to offer as much as an eight-times power-efficiency improvement over Intel’s previous chip generation when it comes to AI-based workloads. The company previewed the technology in action back at Computex 2023, at which time the NPU was alternately referred to as the VPU, or “Versatile Processing Unit.” Now the company is sharing more details, including the architecture.
You can already use a PC’s GPU to power AI workloads, but doing so can guzzle electricity, which isn’t ideal for a computing environment like a battery-constrained laptop. (It’s also potentially wasteful in a desktop environment, of course.) Intel’s solution is the NPU, which has been “optimized for long-sustained, power moderate, intensive applications,” according to Tom Petersen, an Intel Fellow specializing in computer graphics. “So if you’re doing a sustained workload, then the NPU is for you.” An example of this kind of NPU-optimal task might be using an AI image generator to churn out numerous pictures from a locally hosted model.
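For developers wondering what “targeting the NPU” might look like in practice, here is a minimal sketch using Intel’s OpenVINO toolkit, assuming the Meteor Lake NPU shows up as a selectable “NPU” device; the model file name is a placeholder, and none of this is Intel-provided code.

```python
# A minimal sketch (not Intel-provided code): steering inference toward the NPU
# with OpenVINO, falling back to the GPU or CPU when no NPU driver is present.
# Assumes OpenVINO 2023.1+; "model.xml" is a placeholder for any OpenVINO IR model.
import openvino as ov

core = ov.Core()
print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU']

model = core.read_model("model.xml")

# Prefer the NPU for long, sustained inference jobs; otherwise fall back.
if "NPU" in core.available_devices:
    device = "NPU"
elif "GPU" in core.available_devices:
    device = "GPU"
else:
    device = "CPU"

compiled = core.compile_model(model, device_name=device)
infer_request = compiled.create_infer_request()
# From here, the app would feed inputs to infer_request and call infer_request.infer().
```

The device check mirrors Petersen’s framing: long, sustained inference lands on the NPU, while machines without one simply keep running the same workload on the GPU or CPU.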