Slapping one of the most intense Computational Fluid Dynamics (CFD) simulations ever on the world's largest single-node GPU server sounds like a fun weekend activity, honestly. That's exactly how Dr. Moritz Lehmann spent their time recently with a Concorde-focused FluidX3D simulation, and the results are intensely beautiful.
There are a few configurations of the GigaIO SuperNODE server. One has 24 of Nvidia's A100 graphics cards stuffed in it, whereas the particular design ProjectPhysX got to use contains 32 AMD Instinct MI210 64GB GPUs. And damn do they slap.
You can check out the benchmarks over on Dr. Lehmann's ProjectPhysX GitHub page, but the general gist is that it took just 33 hours to run the simulation, including "29 hours for 67k timesteps at 2976×8936×1489 (12.4mm)³ cells, plus 4h for rendering 5×600 4K frames." If that all sounds a little complicated to you, all you need to know is that this was a damn complex simulation containing 40 billion cells overall, and Flight Sim has nothing on it.
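If you want to sanity-check that headline figure, the quoted lattice resolution really does multiply out to roughly 40 billion cells. Here's a trivial sketch of the arithmetic; the resolution numbers come straight from the quote above, everything else is just illustration:

```cpp
// Quick sanity check: 2976 x 8936 x 1489 lattice cells is roughly 40 billion.
#include <cstdio>
#include <cstdint>

int main() {
    const std::uint64_t nx = 2976, ny = 8936, nz = 1489; // resolution from the quoted benchmark
    const std::uint64_t cells = nx * ny * nz;            // total lattice cells
    std::printf("cells = %llu (~%.1f billion)\n",
                (unsigned long long)cells, cells / 1e9); // prints ~39.6 billion
    return 0;
}
```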
The very fact it was able to run the simulation in a single weekend is a marvel unto itself, as the original poster notes: "Commercial CFD would need years for this."
For those who looked past the intensely pixelated scene caused by Reddit's compression algorithm and headed over to the 4K version on YouTube, it's clear something went very right.
Thanks to OpenCL (Open Computing Language), ProjectPhysX says "FluidX3D works out-of-the-box with 32-GPU scaling", which made the whole process a lot easier, with no code changes required.
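That "out-of-the-box" part largely comes down to OpenCL exposing every GPU in the node as an addressable device within a single process. As a rough illustration only (this is not FluidX3D's actual code, and it assumes the standard OpenCL C API headers are installed), enumerating the GPUs on such a machine looks something like this:

```cpp
// Minimal sketch: list every OpenCL GPU visible on the node, the same kind of
// discovery that lets a single process address all 32 MI210s at once.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    cl_uint num_platforms = 0;
    clGetPlatformIDs(0, nullptr, &num_platforms);                 // count OpenCL platforms (e.g. the AMD runtime)
    std::vector<cl_platform_id> platforms(num_platforms);
    clGetPlatformIDs(num_platforms, platforms.data(), nullptr);

    for (cl_platform_id p : platforms) {
        cl_uint num_devices = 0;
        clGetDeviceIDs(p, CL_DEVICE_TYPE_GPU, 0, nullptr, &num_devices); // count GPUs on this platform
        if (num_devices == 0) continue;
        std::vector<cl_device_id> devices(num_devices);
        clGetDeviceIDs(p, CL_DEVICE_TYPE_GPU, num_devices, devices.data(), nullptr);

        for (cl_device_id d : devices) {
            char name[256] = {0};
            cl_ulong vram = 0;
            clGetDeviceInfo(d, CL_DEVICE_NAME, sizeof(name), name, nullptr);
            clGetDeviceInfo(d, CL_DEVICE_GLOBAL_MEM_SIZE, sizeof(vram), &vram, nullptr);
            std::printf("%s (%llu MB VRAM)\n", name, (unsigned long long)(vram >> 20));
        }
    }
    return 0;
}
```

On the SuperNODE a listing like this would show all 32 Instinct MI210s, which is what allows one simulation to spread its 40-billion-cell grid across their combined memory without extra plumbing.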