Runaway artificial intelligence has been a science fiction staple since the 1909 publication of E. M. Forster's The Machine Stops, and it rose to widespread, serious attention in 2023. The National Institute of Standards and Technology released its AI Risk Management Framework in January 2023. Other documents followed, including the Biden administration's Oct. 30 executive order on Safe, Secure, and Trustworthy Artificial Intelligence and, the next day, the Bletchley Declaration on AI Safety, signed by 28 countries and the European Union.
As a professional risk manager, I found all these documents lacking. I see more appreciation for risk principles in fiction. In 1939, author Isaac Asimov got tired of reading stories about intelligent machines turning on their creators. He insisted that people smart enough to build intelligent robots wouldn't be stupid enough to omit moral controls: basic overrides built deep into the fundamental circuitry of all intelligent machines. Asimov's First Law reads: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Regardless of an AI's goals, it is forbidden to violate this law.
Or consider Arthur C. Clarke's famous HAL 9000 computer in the 1968 film 2001: A Space Odyssey. HAL malfunctions not because of a computer bug, but because it correctly computes that the human astronauts are reducing the chance of mission success, which is its programmed objective. Clarke's solution was to ensure manual overrides outside the knowledge and control of the AI. That's how Dave Bowman is able to outmaneuver HAL, using physical door interlocks and disabling HAL's AI circuitry.
While there are objections to both of these approaches, they pass the first risk management test.