Seven leading AI outfits (OpenAI, Google, Anthropic, Microsoft, Meta, Inflection, and Amazon) will meet President Biden today to promise that they'll play nicely with their AI toys and not get us all, you know, dead.
And this is all coming after a UN AI press conference gone wrong where one robot literally said "let's get wild and make this world our playground."
All seven are signing up to a voluntary and non-binding framework around AI safety, security, and trust. You can read the full list of commitments on OpenAI's website. The Biden administration has posted its own factsheet detailing the voluntary arrangement.
But the highlights, as précised by TechCrunch, go something like this: AI systems will be internally and externally tested before release; information on risk mitigation will be broadly shared; external discovery of bugs and vulnerabilities will be facilitated; AI-generated content will be robustly watermarked; the capabilities and limitations of AI systems will be fully detailed; research into the societal risks of AI will be prioritized; and AI deployment will likewise be prioritized for humankind's greatest challenges, including cancer research and climate change.
For now, this is all voluntary. However, the White House is said to be developing an executive order that could mandate measures such as external testing before an AI model can be released.
Overall, it looks like a sensible and comprehensive list, though the devil will be in the implementation and policing. It's obviously welcome that AI outfits are voluntarily signing up to these commitments. But the real test will come when (and it will happen) those commitments conflict with commercial imperatives.
To boil it down to base terms, what will a commercial outfit actually do when keeping these promises means losing money?