Cerebras Systems has unveiled its third-generation Wafer Scale Engine chip, the WSE-3, which packs 900,000 AI-optimized cores and is built to train AI models of up to 24 trillion parameters.
Ever since the launch of its first Wafer Scale Engine (WSE) chip, Cerebras hasn't looked back, and its third-generation part has now been unveiled with staggering specifications, which is to be expected given its sheer size. As the name suggests, the chip is essentially an entire wafer's worth of silicon, and this time Cerebras is betting on the AI boom with some powerful specifications, detailed below.
Talking about the chip itself, the Cerebras WSE-3 has a die size of 46,225 mm², 57x larger than the NVIDIA H100's 826 mm². Both chips are fabricated on TSMC's 5nm process node. The H100 is regarded as one of the best AI chips on the market with its 16,896 CUDA cores and 528 Tensor Cores, but it is dwarfed by the WSE-3, which offers an insane 900,000 AI-optimized cores per chip, a 52x increase.
The WSE-3 also has big performance numbers to back it up: 21 petabytes per second of memory bandwidth (7,000x more than the H100) and 214 petabits per second of fabric bandwidth (3,715x more than the H100). The chip also incorporates 44 GB of on-chip memory, 880x more than the H100.
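A quick back-of-the-envelope check shows how those multiples fall out of the raw spec figures. The sketch below uses the numbers quoted in this article; the H100 baselines for memory bandwidth (~3 TB/s of HBM3) and on-chip memory (its ~50 MB L2 cache) are assumptions on my part, since the article does not state which H100 figures the comparisons use.

```python
# Sanity-check the WSE-3 vs. H100 multiples quoted above.
# WSE-3 figures are from the article; the H100 memory-bandwidth (~3 TB/s HBM3)
# and on-chip-memory (~50 MB L2 cache) baselines are assumptions.
specs = {
    "cores (AI vs. CUDA)":      (900_000, 16_896),
    "die area (mm^2)":          (46_225, 826),
    "memory bandwidth (TB/s)":  (21_000, 3),      # 21 PB/s = 21,000 TB/s
    "on-chip memory (MB)":      (44_000, 50),     # 44 GB vs. ~50 MB L2
}

for name, (wse3, h100) in specs.items():
    print(f"{name}: {wse3 / h100:,.0f}x")
```

The computed ratios land on or close to the rounded multiples quoted in the article (the cores and die-area divisions come out near 53x and 56x, which Cerebras rounds to 52x and 57x).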
Compared to the original WSE, the WSE-3 chip offers 2.25x the cores (900K vs 400K), 2.4x the SRAM (44 GB vs 18 GB), and much higher interconnect speeds, all within the same package size. There are also 54% more transistors on the WSE-3 than on the WSE-2 (4 trillion vs 2.6 trillion).
Read more on wccftech.com