With a new executive order, the Biden administration plans to reach deep into the non-legislative compartments of its policy toolbox to manage the risks of increasingly powerful AI systems while also promoting their development.
The White House revealed the outlines of this agenda in a fact sheet published this morning; the order itself had not been posted as of noon ET. The agenda leads off with a plan to leverage the Defense Production Act to require companies developing AI models that might pose “a serious risk to national security, national economic security, or national public health and safety” to notify the government when training those models and then share the results of their red-team safety tests.
(Presumably these developers won’t have the option of skipping that basic security step, in which researchers try to exploit a system to expose its weaknesses, but only the full text of the EO will clarify that.)
The outline says the order will also direct the National Institute of Standards and Technology (NIST), which published its AI Risk Management Framework earlier this year, to write standards for red-teaming AI models. The Departments of Homeland Security and Energy will then oversee the application of those standards to critical infrastructure.
The Department of Commerce, meanwhile, will “develop guidance for content authentication and watermarking to clearly label AI-generated content.” Private companies such as Google are already developing watermarks to stamp the output of their AI systems, but there’s no industry-wide standard that people can learn to spot.
The outline also calls on federal agencies, with help from the National Science Foundation, to set a better example in their own use of “privacy-preserving” techniques that can allow the training of AI models without revealing personal data.