TensorWave, the high-end cloud service provider (CSP) focused on AMD's AI hardware, has announced that it is developing the world's largest AMD GPU clusters, powered by the Instinct MI300X, MI325X & MI350X accelerators.
TensorWave has made the rounds on the internet, primarily due to its optimistic approach towards AMD's AI accelerators and the fact that the firm's "AI compute" portfolio is built entirely around Team Red's products. TensorWave's stated primary aim is "democratizing AI" by siding with AMD and promoting its "Instinct" lineup of AI accelerators to potential customers.
Now, according to an announcement by TensorWave's CEO Darrick Horton (via LinkedIn), the company is on its way to building the world's "largest" AMD GPU clusters with Instinct MI300X, MI325X, and next-gen MI350X accelerators.
TensorWave plans for its upcoming AI clusters to draw a "whopping" one gigawatt of power, which suggests we should expect serious compute out of the firm's future projects, although further details remain under wraps for now. Another interesting detail is that TensorWave plans to leverage the newly introduced "Ultra Ethernet" interconnect standard, which is said to be a superior implementation for AI clusters.
When you look at how massive NVIDIA has become in the AI markets, it does create a "bullish" case for companies like AMD, which are in pursuit of filling the gaps left by Team Green. Although it wouldn't be wrong to say that we are seeing a form of "monopolized market," AMD is still giving it its all when it comes to being competitive and constantly refining its AI lineup.