In addition to the MI300X for AI, AMD is announcing that its Instinct MI300A APU has entered volume production and is expected to offer the world's fastest HPC performance when it launches next year.
We have waited years for AMD to finally deliver on the promise of an Exascale-class APU, and that day is nearing as we approach the launch of the Instinct MI300A. Today, AMD confirmed that the MI300A APU entered volume production this quarter and is on track to become the world's fastest HPC solution when it becomes available in 2024.
The AMD Instinct MI300A APU combines several architectures and interconnect technologies, with Zen 4, CDNA 3, and the 4th Gen Infinity architecture at the forefront. Some of the highlights of the MI300A APU are detailed below.
The packaging of the MI300A is very similar to that of the MI300X, except that it makes use of TCO-optimized memory capacities and Zen 4 cores. So let's get down to the details of this Exascale-class horsepower for next-gen HPC and AI data centers.
One of the active dies has two CDNA 3 GCDs cut out and replaced with three Zen 4 CCDs, which bring their own separate pool of cache and core IP. Each CCD offers 8 cores and 16 threads, for a total of 24 cores and 48 threads on the CPU side. There's also 24 MB of L2 cache (1 MB per core) and a separate pool of L3 cache (32 MB per CCD). Keep in mind that the CDNA 3 GCDs carry their own separate L2 cache as well.
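As a quick sanity check on those CPU-side numbers, the per-CCD figures quoted above roll up as in the short Python sketch below; the totals (including the 96 MB of L3 implied by 32 MB per CCD across three CCDs) are plain arithmetic on the stated values, not an official spec sheet.

```python
# Roll-up of the MI300A CPU-side figures quoted above (illustrative arithmetic only).
ccd_count = 3           # Zen 4 CCDs on the MI300A
cores_per_ccd = 8       # 8 cores / 16 threads per CCD
l2_per_core_mb = 1      # 1 MB of L2 cache per core
l3_per_ccd_mb = 32      # 32 MB pool of cache per CCD

total_cores = ccd_count * cores_per_ccd        # 24 cores
total_threads = total_cores * 2                # 48 threads with SMT
total_l2_mb = total_cores * l2_per_core_mb     # 24 MB of L2
total_l3_mb = ccd_count * l3_per_ccd_mb        # 96 MB in the separate per-CCD pool

print(total_cores, total_threads, total_l2_mb, total_l3_mb)  # 24 48 24 96
```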
For the GPU side, AMD has enabled a total of 228 Compute Units based on the CDNA 3 architecture, which works out to 14,592 cores, or 38 Compute Units per GPU chiplet. That rounds up the highlighted features of the AMD Instinct MI300 accelerators.
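The GPU-side figures follow the same arithmetic. The sketch below assumes CDNA 3's usual 64 stream processors per Compute Unit and the six GPU chiplets implied by the 228-to-38 split; both are inferences from the numbers above rather than details stated in this article.

```python
# Roll-up of the MI300A GPU-side figures quoted above (illustrative arithmetic only).
total_cus = 228        # CDNA 3 Compute Units enabled on the MI300A
sps_per_cu = 64        # assumed stream processors per CDNA 3 Compute Unit
gpu_chiplets = 6       # assumed chiplet count, inferred from 228 CUs at 38 CUs per chiplet

total_cores = total_cus * sps_per_cu           # 14,592 GPU cores
cus_per_chiplet = total_cus // gpu_chiplets    # 38 Compute Units per GPU chiplet

print(total_cores, cus_per_chiplet)  # 14592 38
```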
Coming to the performance figures, AMD once again compared the MI300A against
Read more on wccftech.com