OpenAI, Anthropic Partner With US AI Safety Institute: If you have been following how rapidly generative artificial intelligence is improving, you will know that many experts and industry leaders have voiced concerns that it may prove dangerous to humanity. Now, in a step toward addressing those concerns, two of the leading AI companies, OpenAI and Anthropic, will give the US government access to their major AI models before public release, so that their safety can be evaluated ahead of deployment.
OpenAI CEO Sam Altman took to X (formerly Twitter) to announce the agreement. “We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models,” Altman said.
He added that OpenAI considers it important that this happens at the national level. “US needs to continue to lead!” Altman said.
Simply put, the US government will be able to work with the AI companies to identify and mitigate potential safety risks in their advanced AI models, and then provide feedback before those models reach the public.
"Safe, trustworthy AI is crucial for the technology's positive impact. Our collaboration with the US AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment," said Jack Clark, co-founder and Head of Policy at Anthropic.
The US AI Safety Institute is a part of the US Department of Commerce's National Institute of Standards and Technology (NIST). It is a relatively new institution, created by the Biden administration last year to address the risks of AI. Moving forward, it will also partner with the UK government's AI Safety Institute to help AI companies ensure safety.
Elizabeth Kelly, director of the US AI Safety Institute, also commented on the newfound partnership with OpenAI and Anthropic.
Read more on tech.hindustantimes.com