When it comes to artificial intelligence, one of the most commonly debated issues in the technology community is safety, so much so that safety concerns helped lead to the ouster of OpenAI co-founder Sam Altman, according to Bloomberg News.
And those concerns boil down to a truly unfathomable question: Will AI kill us all? Allow me to set your mind at ease: AI is no more dangerous than the many other existential risks facing humanity, from supervolcanoes to stray asteroids to nuclear war.
I am sorry if you don't find that reassuring. But it is a far more optimistic view than that of the AI researcher Eliezer Yudkowsky, who believes humanity has entered its last hour. In his view, AI will be smarter than us and will not share our goals, and soon enough we humans will go the way of the Neanderthals. Others have called for a six-month pause in AI progress so that we can get a better grasp of what is going on.
AI is just the latest in the long line of technological challenges humankind has faced throughout history. The printing press and electricity brought both benefits and misuses, but it would have been a mistake to press the “stop” or even the “slow down” button on either.
AI worriers like to start with the question: “What is your ‘p’ [probability] that AI poses a truly existential risk?” Since “zero” is obviously not the right answer, the discussion continues: Given a non-zero risk of total extinction, shouldn’t we be extremely cautious? You can then weigh that potential risk against the forthcoming productivity improvements from AI, as one Stanford economist does in a recent study. You still end up pretty scared.
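To see why that framing almost always produces a frightening answer, here is a minimal, purely illustrative sketch of the probability-weighted comparison it implies; the function and every number in it are my own assumptions, not figures from the study cited above:

    # Purely illustrative: a toy probability-weighted comparison of the kind
    # the "non-zero p of extinction" framing implies. All numbers are
    # hypothetical assumptions, not figures from the Stanford study.

    def expected_net_payoff(p_extinction, gain_pct, extinction_cost_pct):
        """Probability-weighted net payoff of deploying AI, as a percentage
        of baseline welfare: the upside in the survival branch minus the
        downside weighted by the extinction probability."""
        return (1 - p_extinction) * gain_pct - p_extinction * extinction_cost_pct

    # If extinction is valued as losing (nearly) everything, even a small p
    # swamps a plausible productivity gain, which is why the exercise tends
    # to end in a scary number.
    for p in (0.001, 0.01, 0.05):
        print(f"p = {p}: expected payoff = {expected_net_payoff(p, 10.0, 10_000.0):.1f}%")

Because the extinction branch is valued as losing essentially everything, even a tiny probability dominates any plausible productivity gain in this kind of calculation.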
One possible counterargument is that we can successfully align the inner workings of AI systems with human interests. I am