One of many concerns about accelerating AI development is the risk it poses to human life. The worry is real enough that numerous leading minds in the field have warned against it: More than 300 AI researchers and industry leaders recently issued a statement asking someone (except them, apparently) to step in and do something before humanity faces—and I quote—«extinction.» Skynet scenarios are usually the first thing that leaps to mind when the subject comes up, thanks to the popularity of blockbuster Hollywood films. Most experts, though, seem to think the greater danger lies in, as professor Ryan Calo of the University of Washington School of Law put it, AI's role in «accelerating existing trends of wealth and income inequality, lack of integrity in information, & exploiting natural resources.»
But it seems like a Skynet-style apocalyptic end of the world might be more plausible than some people thought. During a presentation at the Royal Aeronautical Society's recent Future Combat Air and Space Capabilities Summit, Col Tucker «Cinco» Hamilton, commander of the 96th Test Wing's Operations Group and the US Air Force's Chief of AI test and operations, warned against an over-reliance on AI in combat operations because sometimes, no matter how careful you are, machines can learn the wrong lessons.
Tucker said that during a simulation of a suppression of enemy air defense [SEAD] mission, an AI-equipped drone was sent to identify and destroy enemy missile sites—but only after final approval for the attack was given by a human operator. That seemed to work for a while, but eventually the drone attacked and killed its operator, because the operator was interfering with the mission that had been «reinforced» in its AI training:
Read more on pcgamer.com