AMD's Ryzen AI CPUs outshine Intel's Core Ultra chips in new AI benchmarks that showcase LLM & GenAI workloads.
AMD was the first to enter the AI PC space with its first-gen Ryzen AI CPUs, codenamed Phoenix, which were introduced last year. The company has since launched its refreshed Ryzen AI lineup, known as Hawk Point, which offers enhanced "XDNA" NPUs delivering a 60% boost in AI TOPS. It looks like AMD has put a lot of work into software optimizations for client-side & localized AI workloads, as demonstrated in new benchmarks published by the company.
In the new tests, AMD emphasizes running LLMs locally on your CPU, which is made possible with a range of models including Llama 2, Mistral, and Code Llama, as well as retrieval-augmented generation (RAG) workflows. Running a localized AI model on your PC means you get more privacy than with cloud-hosted models, avoid subscription fees, and don't need an internet connection. The company is pushing further into this space with its recent guide on how to set up your own local AI chatbot, which rivals NVIDIA's Chat with RTX chatbot.
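For readers curious what running one of these quantized models locally actually looks like, a minimal sketch using the open-source llama.cpp runtime is below. The model filename and directory are placeholders for whatever Q4_K_M GGUF build you download; AMD's own guide uses LM Studio, so this is an alternative command-line route, not the company's published method.

```shell
# Hypothetical paths: assumes llama.cpp is built locally and a
# Q4_K_M-quantized GGUF model has already been downloaded.
./llama-cli \
  -m models/llama-2-7b-chat.Q4_K_M.gguf \  # 4-bit quantized 7B model
  -p "Explain what an NPU does in one paragraph." \
  -n 128 \        # cap the response at 128 generated tokens
  --ctx-size 2048 # context window for the chat session
```

Everything runs on-device, which is the privacy and offline advantage the article describes.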
For performance testing, AMD uses its Ryzen 7 7840U APU at 15W and compares it against the Intel Core Ultra 7 155H at 28W. Both chips are running with 16 GB of LPDDR5-6400 memory & the latest driver packages.
First up, we have the Mistral Instruct 7B LLM, where the AMD Ryzen 7 7840U CPU completes the AI processing in just 61% of the time of the Intel offering, while Llama 2 7B Chat is even faster, with the Ryzen AI chip completing the task in 54% of the time.
To simplify things, the AMD Ryzen 7 7840U (15W) CPU can offer up to 14% faster performance in Llama v2 Chat 7B (Q4 K M) and 17% faster performance in Mistral Instruct 7B (Q4 K M). Time to first token is 79% faster in Llama v2
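As a quick sanity check on the time-based figures above: finishing a task in a given fraction of the baseline time corresponds to a throughput speedup of one over that fraction on that same task (note this is a different metric from the separate tokens-per-second percentages AMD quotes). A small illustration:

```python
def speedup_from_time_fraction(frac: float) -> float:
    """If a task finishes in `frac` of the baseline time,
    the effective speedup on that task is 1 / frac."""
    return 1.0 / frac

# Figures from AMD's comparison: 61% of the time (Mistral Instruct 7B)
# and 54% of the time (Llama 2 7B Chat) versus the Core Ultra 7 155H.
print(f"Mistral: {speedup_from_time_fraction(0.61):.2f}x")  # ~1.64x
print(f"Llama 2: {speedup_from_time_fraction(0.54):.2f}x")  # ~1.85x
```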
Read more on wccftech.com