Don't panic, the folk at CERN aren't hurling graphics cards into one another beneath Switzerland to see what happens when GPU particles collide. They're actually using Nvidia's graphics silicon to cut down the amount of energy it needs to compute what happens when the Large Hadron Collider (LHC) collides other stuff.
Particles and things. Beauty quarks. Y'know, science stuff.
It's no secret that, while the humble GPU was originally conceived for the express purpose of chucking polygons around a screen in the most efficient way, the parallel processing prowess of modern graphics chips makes for an incredibly powerful tool in the scientific community. And an incredibly efficient one, too. Indeed, A Large Ion Collider Experiment (ALICE) has been using GPUs in its calculations since 2010, and its work has now encouraged their increased use in various LHC experiments.
The potential bad news is that it does mean there's yet another group desperate for the limited amount of GPU silicon coming out of the fabs of TSMC and Samsung. Though at least this lot will be using it for a loftier purpose than mining fake money coins.
Right, guys? You wouldn't just be mining ethereum on the side now, would you?
On the plus side, the CERN candidate nodes are currently using last-gen tech. For the upcoming LHC Run 3—where the machine is recommissioned for a "three-year physics production period" after a three-year hiatus—the nodes are pictured using a pair of AMD's 64-core Milan CPUs alongside two Turing-based Nvidia Tesla T4 GPUs.
Okay, no one tell them how much more effective the Ampere architecture is in terms of straight compute power, and I think we'll be good. Anyway, as CERN calculates, if it was just using purely CPU-based
Read more on pcgamer.com