The White House has reached a voluntary agreement with the top tech companies to prevent their AI technologies from wreaking havoc on society.
The Biden administration announced the agreement today amid growing fears that generative AI programs could fuel misinformation and take jobs away from humans.
Amazon, Anthropic, Google, Inflection AI, Meta, Microsoft, and OpenAI have agreed to have independent experts test their AI programs for safety before public release. They will also develop ways to essentially watermark AI-generated content to prevent the public from falling for deepfakes and other AI-created misinformation.
The companies also vowed to invest in cybersecurity to prevent their proprietary AI code, including the model weights, from being stolen or leaked to the public.
“These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI — safety, security, and trust,” the White House says.
But other aspects of the agreement stop short of regulating the AI programs. For example, the White House says the seven companies will research the risks of AI, “including on avoiding harmful bias and discrimination, and protecting privacy.” But there’s no mention of the same companies refraining from scraping public data to train their AI models—a practice that’s drawing regulatory scrutiny from the FTC and is the subject of several lawsuits.
Sen. Mark Warner, chairman of the Senate Select Committee on Intelligence, said in a statement that AI vendors "frequently talk about their commitment to security and safety, [but] we have repeatedly seen the expedited release of products that are exploitable, prone to generating…"