Nvidia is, at this point, so far ahead in the AI hardware game that competing companies are doing the most unlikely of things: working together to keep up with the jolly green giant and beat it at its own game.
Google, Intel, Microsoft, Meta, AMD, Hewlett Packard Enterprise, Cisco and Broadcom have announced the formation of the catchily titled "Ultra Accelerator Link Promoter Group", with the goal of creating a new interconnect standard for AI accelerator chips.
Nvidia's proprietary NVLink interconnect is used to link multiple chips together for demanding AI tasks, and it's mighty fast, particularly when stacked on the latest AI hardware: Nvidia's Blackwell GPUs support up to 18 NVLink connections at 100 GB/s each, for a total bandwidth of 1.8 TB/s per GPU.
However, because it's proprietary tech, it creates a closed ecosystem: whichever interconnect standard a buyer adopts effectively dictates the hardware they can use alongside it, and that lock-in is what this new group aims to address.
The UALink Promoter Group's goal is to create an open standard that allows multiple companies to develop AI hardware using the new connection (via Ars Technica), much like Compute Express Link, the open, high-speed connection standard developed by Intel for linking CPUs and devices in data centers.
The first version of the new standard, UALink 1.0, is said to be based on technologies like AMD's Infinity Architecture and is expected to improve speed and reduce latency compared to existing methods.
There is a catch, however: products using the new interconnect aren't expected to arrive for another two years, giving Nvidia quite a head start.
And in the meantime, Nvidia's overall AI hardware dominance shows no signs of waning. With huge orders for its previous generation H100 GPUs and tens of thousands of its latest Blackwell GPUs already sold before the AI chips were even announced, any company attempting to disrupt Nvidia's position faces an uphill battle.
Read more on pcgamer.com