AMD's Strix Point APUs show a strong performance advantage over Intel's Lunar Lake offerings in AI LLM workloads.
The demand for higher performance in AI workloads has not only pushed many companies to bring specialized hardware to market but has also made the competition fiercer. As large language models (LLMs) have grown in capability and size, the need for faster hardware keeps increasing.
To tackle this, AMD introduced its AI-oriented mobile processors, known as Strix Point, a while back. In a recent blog post, the company claims that its Strix Point APUs hold a sizable lead over rival chips while also cutting latency for quicker output. According to AMD, the Ryzen AI 300 processors deliver more tokens per second than Intel's Lunar Lake chips, Intel's own AI-focused mobile lineup.
As per the comparison, the Ryzen AI 9 HX 375 offers up to 27% higher performance than the Intel Core Ultra 7 258V in consumer LLM applications running in LM Studio. The 258V isn't the fastest chip in the Lunar Lake lineup, but it sits close to the higher-end models, since the core/thread count is identical across the lineup and only the clock speeds differ.
LM Studio, the tool AMD used for these benchmarks, is a consumer-friendly application built on llama.cpp that spares users from learning the technical side of running LLMs. Llama.cpp is an inference framework optimized for x86 CPUs using AVX2 instructions; it doesn't need a GPU to run LLMs, but it can be accelerated by one.
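Throughput claims like these are typically computed as decoded tokens divided by wall-clock generation time. A minimal sketch of that measurement, using a hypothetical `fake_generate` stand-in (any real backend, such as a llama.cpp binding, would take its place):

```python
import time

def tokens_per_second(generate_fn, prompt, n_tokens):
    """Time a generation call and return decode throughput in tokens/s.

    `generate_fn` is a hypothetical stand-in for any LLM backend;
    it must yield up to `n_tokens` tokens for `prompt`.
    """
    start = time.perf_counter()
    tokens = list(generate_fn(prompt, n_tokens))
    elapsed = time.perf_counter() - start
    return len(tokens) / elapsed

def fake_generate(prompt, n_tokens):
    # Dummy generator standing in for a real model, for illustration only.
    for i in range(n_tokens):
        time.sleep(0.001)  # pretend each token takes ~1 ms to decode
        yield f"tok{i}"

if __name__ == "__main__":
    tps = tokens_per_second(fake_generate, "Hello", 100)
    print(f"{tps:.1f} tokens/s")
```

Real benchmarks also report time to first token separately, since prompt processing and token decoding stress the hardware differently.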
In the latency department, AMD says the Ryzen AI 9 HX 375 can deliver up to 3.5x lower latency than its rival, and it achieves up to 50.7 tokens per second versus 39.9 tokens per second for the Core Ultra 7 258V in Meta's Llama 3.2 1B Instruct.
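The quoted throughput figures are consistent with the 27% headline claim, since 50.7 / 39.9 ≈ 1.27. A quick sanity check:

```python
# AMD's quoted throughput for Meta's Llama 3.2 1B Instruct (tokens/s):
ryzen_tps = 50.7   # Ryzen AI 9 HX 375
intel_tps = 39.9   # Core Ultra 7 258V

# Relative advantage as a percentage over the Intel baseline.
advantage = (ryzen_tps / intel_tps - 1) * 100
print(f"Ryzen advantage: {advantage:.0f}%")  # → Ryzen advantage: 27%
```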
Read more on wccftech.com