The people making artificial intelligence say that artificial intelligence is an existential threat to all life on the planet and we could be in real trouble if somebody doesn't do something about it.
"AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI," the prelude to the Center for AI Safety's Statement on AI Risk states. "Even so, it can be difficult to voice concerns about some of advanced AI's most severe risks.
"The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI's most severe risks seriously."
And then, finally, the statement itself:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
It's a real banger, alright, and more than 300 researchers, university professors, institutional chairs, and the like have put their names to it. The top two signatories, Geoffrey Hinton and Yoshua Bengio, have both been referred to in the past as "godfathers" of AI; other notable names include Google DeepMind CEO (and former Lionhead lead AI programmer) Demis Hassabis, OpenAI CEO Sam Altman, and Microsoft CTO Kevin Scott.
It's a veritable bottomless buffet of big brains, which makes me wonder how they seem to have collectively overlooked what I think is a pretty obvious question: If they seriously think their work threatens the "extinction" of humanity, then why not, you know, just stop?
Maybe they'd say that they intend to be careful, but that others will be less scrupulous. And there are