A group of experts in the field of artificial intelligence, including representatives of companies directly involved in the space that stand to benefit from it commercially, has signed a document saying AI poses a major risk to human life. A statement posted on the Center for AI Safety website says that mitigating the risk of "extinction from AI" should be as significant a priority as other "societal-scale" risks like nuclear war and pandemics.
Dozens of experts in the field of AI have signed the statement, including OpenAI co-founder and CEO Sam Altman; Google DeepMind CEO Demis Hassabis; Microsoft CTO Kevin Scott; Microsoft chief scientific officer Eric Horvitz; Cambridge University professor of physics Martin Rees; the musician Grimes; and numerous other people from universities, OpenAI, Google, and other institutions around the world.
"Despite its importance, AI safety remains remarkably neglected, outpaced by the rapid rate of AI development. Currently, society is ill-prepared to manage the risks from AI," the Center for AI Safety said.
The group's charter is to help business leaders and policymakers make informed decisions about managing AI risk. Among the risks it cites is the potential for malicious actors to "repurpose" AI to make chemical weapons or destabilize governments.
AI could also be used to spread disinformation, the group said, or lead to a situation where "humanity loses the ability to self-govern and becomes completely dependent on machines." These are just a few of the risks that the Center for AI Safety believes are real and worth paying attention to.
OpenAI is the company behind the ChatGPT software, and Microsoft has invested billions of dollars in it.