OpenAI, the developer of ChatGPT, warned in a blog post on Monday that artificial intelligence (AI) could surpass the "expert skill level" of humans in various domains within the next 10 years, giving rise to a "superintelligence" more powerful than any previous technology, Business Insider reported. "Superintelligence will be more powerful than any other technology humanity has had to grapple with in the past," OpenAI officials, including CEO Sam Altman, wrote, stressing that the risks of this development must be managed in order to reach a significantly more prosperous future.
The introduction of ChatGPT and similar generative AI tools has raised concerns among industry leaders about potential disruption to society, including job displacement, the spread of misinformation, and a rise in criminal activity. It has also set off an AI arms race, with major companies such as Microsoft and Google competing to build rival products.
These concerns have prompted calls for AI regulation. In the blog post, OpenAI's leaders argued for a proactive approach to the technology's potential risks, drawing comparisons to precedents such as nuclear power and synthetic biology. While the risks of current AI systems also need to be mitigated, they stressed that superintelligence would require special treatment and coordination.
CEO Sam Altman recently appeared before a US congressional committee to address lawmakers' concerns about the lack of regulations governing AI development. In their blog post, Altman and his colleagues proposed regulating AI development above a certain capability threshold through measures such as audits and safety standards.