It is not every day that I read a prediction of doom as arresting as Eliezer Yudkowsky's in Time magazine last week. “The most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances,” he wrote, “is that literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’ … If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”
Do I have your attention now?
Yudkowsky is not some random Cassandra. He leads the Machine Intelligence Research Institute, a nonprofit in Berkeley, California, and has already written extensively on the question of artificial intelligence. I still remember vividly, when I was researching my book Doom, his warning that someone might unwittingly create an AI that turns against us — “for example,” I suggested, “because we tell it to halt climate change and it concludes that annihilating Homo sapiens is the optimal solution.” It was Yudkowsky who some years ago proposed a modified Moore's law: Every 18 months, the minimum IQ necessary to destroy the world drops by one point.
Now Yudkowsky has gone further. He believes we are fast approaching a fatal conjuncture, in which we create an AI more intelligent than us, which “does not do what we want, and does not care for us nor for sentient life in general. … The likely result of humanity facing down an opposed superhuman intelligence is a total loss.”
He is suggesting that such an AI could easily escape from the internet “to build artificial life forms,” in effect waging biological warfare on us. His recommendation