One of the lasting consequences of the Covid-19 pandemic has been a decline of trust in public-health experts and institutions. It is not hard to see why: America botched Covid testing, kept the schools closed for far too long, failed to vaccinate enough people quickly enough, and inflicted far more economic damage than was necessary — and through all this, public-health experts often had the dominant voice.
In their defense, public-health officials are trained to prioritize public safety above all else. And to their credit, many now recognize that any response to a public-health crisis needs to consider the tradeoffs inherent in any intervention. As Dr. Anthony Fauci recently told the New York Times, “I'm not an economist.”
As it happens, I am. And my fear is that we are about to make the same mistake again — that is, trusting the wrong experts — with artificial intelligence.
Some of the greatest minds in the field, such as Geoffrey Hinton, are speaking out against AI developments and calling for a pause in AI research. Earlier this week, Hinton left his AI work at Google, declaring that he was worried about misinformation, mass unemployment and future risks of a more destructive nature. Anecdotally, I know from talking to people working on the frontiers of AI that many other researchers are worried too.
What I do not hear, however, is a more systematic cost-benefit analysis of AI progress. Such an analysis would have to consider how AI might fend off other existential risks — deflecting that incoming asteroid, for example, or developing better remedies against climate change — or how AI could cure cancer or otherwise improve our health. And these analyses often fail to take into account the risks to America and the world if we