For a hot minute last week it looked like we were already on the brink of killer AI. Several news outlets reported that a military drone attacked its operator after deciding the human stood in the way of its objective. Except it turned out this was a simulation. And then it transpired the simulation itself didn't happen. An Air Force colonel had mistakenly described a thought experiment as real at a conference.
Even so, fibs travel halfway around the world before the truth laces up its boots, and the story is bound to seep into our collective, unconscious worries about AI's threat to the human race, an idea that has gained steam thanks to warnings from two “godfathers” of AI and two open letters about existential risk.
Fears about runaway gods and machines, deeply baked into our culture, are being triggered. But everyone needs to calm down and take a closer look at what's really going on here.
First, let's acknowledge the cohort of computer scientists who have long believed that AI systems like ChatGPT need to be more carefully aligned with human values. They propose that if you design AI systems to follow principles like integrity and kindness, those systems are less likely to turn around and try to kill us all in the future. I have no issue with these scientists.
But in the last few months, the idea of an extinction threat has become such a fixture in public discourse that you could bring it up at dinner with your in-laws and have everyone nodding in agreement about the issue's importance.
On the face of it, this is ludicrous. It is also great news for leading AI companies, for two reasons:
1) It creates the specter of an all-powerful AI system that will eventually become so inscrutable we can't hope to understand it. That may sound scary, but it