Ever since the poem-churning ChatGPT burst onto the scene six months ago, AI expert Gary Marcus has urged caution about artificial intelligence's ultra-fast development and adoption.
But unlike AI's apocalyptic doomsayers, the New York University emeritus professor told AFP in a recent interview that the technology's existential threats may currently be "overblown."
"I'm not personally that concerned about extinction risk, at least for now, because the scenarios are not that concrete," said Marcus in San Francisco.
"A more general problem that I am worried about... is that we're building AI systems that we don't have very good control over and I think that poses a lot of risks, (but) maybe not literally existential."
Long before the advent of ChatGPT, Marcus designed his first AI program in high school -- software to translate Latin into English -- and after years of studying child psychology, he founded Geometric Intelligence, a machine learning company later acquired by Uber.
In March, alarmed that ChatGPT creator OpenAI was releasing its latest, more powerful AI model with Microsoft, Marcus signed an open letter with more than 1,000 people, including Elon Musk, calling for a global pause in AI development.
But last week he did not sign the more succinct statement by business leaders and specialists -- including OpenAI boss Sam Altman -- that caused a stir.
Global leaders should be working to reduce "the risk of extinction" from artificial intelligence technology, the signatories insisted.
The one-line statement said tackling the risks from AI should be "a global priority alongside other societal-scale risks such as pandemics and nuclear war".
Signatories included those who are building systems with a view to achieving "general" AI, a technology that would match human capabilities across the board.