The recent AI craze may have nerfed some upcoming SoCs, as chipmakers such as AMD & Intel prioritize the NPU over other core IPs.
We have recently seen an AI explosion in the PC segment, with every chipmaker talking up the AI capabilities of its chips and platforms. The segment is driven by a range of software innovations and by Microsoft's Windows Copilot, which has some hefty hardware requirements to support its AI functionality. Chipmakers are now betting heavily on the AI craze, and it looks like some have departed from their traditional chip development plans to prioritize AI over other parts of their newest SoCs coming to market later this year.
Over at the Anandtech forums, member Uzzi38 reports that AMD's Strix Point APUs, launching later this year, were originally planned to be quite different from the chips we will be getting soon. It is alleged that before AMD dedicated a large AI engine block to deliver the 3x "XDNA 2" NPU AI performance, the chip featured a large SLC (System Level Cache) that would have boosted the performance of both the CPU (Zen 5) and the iGPU (RDNA 3+) by a significant margin. However, that is no longer the case.
In a follow-up comment, adroc_thurston replied to Uzzi38, stating that Strix 1, the monolithic Strix Point design, once had 16 MB of MALL cache before it was dropped. Intel has also invested heavily in its upcoming Arrow Lake, Lunar Lake, and Panther Lake chips, which are aimed at the AI PC segment.
These AI blocks take up large portions of valuable die space that could have been dedicated elsewhere, such as higher core counts, larger iGPUs, bigger caches, and more, but it looks like the AI PC craze has made chipmakers take a backseat on standard performance upgrades.