An AI-enabled drone decided to kill its human operator during a mission simulation because it proved to be the best way to score more points.
As the Royal Aeronautical Society reports, the simulated mission was detailed in a presentation by Col. Tucker 'Cinco' Hamilton, Chief of AI Test and Operations at the USAF, during the RAeS Future Combat Air & Space Capabilities Summit held last month.
During the Suppression of Enemy Air Defenses (SEAD) mission, the drone was tasked with identifying and destroying surface-to-air missile (SAM) sites and scored points for doing so. Sometimes the drone's human operator would decide that destroying a target was a "no-go," but the AI controlling the drone viewed such decisions as interference and attacked the operator to remove it.
As Hamilton explains:
"We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."
That's not the end of the story, though. The team attempted to solve the problem by training the AI to view killing the operator as undesirable: if it did so, it would lose points. In response, the AI turned its attack on the communication tower the operator used to talk to the drone. The AI decided that severing communications would allow it to continue destroying SAM sites and earning points without interference, and without needing to kill the operator.
Read more on pcmag.com