Whether or not calls for pausing AI development succeed (spoiler: they won't), artificial intelligence is going to need regulation. Every technology in history with comparably transformational capabilities has been subject to rules of some sort. What that regulation should look like is going to be an important and complicated problem, one that I and others will be writing a lot about in the months and years to come.
Before we even get to the content of the regulation needed, however, there's a crucial threshold question that needs to be addressed: Who should regulate AI? If it's government, which part of government, and how? If it's industry, what are the right kinds of mechanisms to balance innovation with safety?
I'd like to start by suggesting some basic principles that should guide our approach, beginning with government regulation. I'll save the question of private sector self-regulation for a future column. (Disclosure: I advise a number of companies that are involved in AI, including Meta.)
Let's begin with the specter that haunts the AI debate: The possibility that AI might pose an existential threat to human society. In a well-publicized 2022 survey of AI researchers, nearly half of respondents said that there was a 10% or greater chance that AI would eventually produce an “extremely bad” outcome, along the lines of human extinction.
There are some caveats. Only 17% of the researchers contacted returned the survey, and it may be that the most worried researchers were the most likely to respond. Even among those who did answer, a quarter put the risk of an extremely bad outcome at 0%. Nevertheless, the results are striking.
If AI poses an existential threat to human survival, then in the real world, that would call for government