The government published a different kind of help-wanted ad this week, asking for input on how it might craft policies to hold artificial-intelligence systems accountable without holding back the development of useful AI.
The request for comments posted by the National Telecommunications and Information Administration (NTIA) “seeks feedback on what policies can support the development of AI audits, assessments, certifications, and other mechanisms to create earned trust in AI systems.”
The idea here is that before we might learn to stop worrying and love AI (or at least stop having nightmares about it), we need to establish accounting and auditing procedures to track and grade its operations. The NTIA announcement compares this to the accounting rules intended to keep corporate financial statements reliable, though the agency may also have in mind, as a cautionary example, the often-opaque content-moderation algorithms employed by large social platforms.
Specifically, the feds want to know:
“What kinds of data access is necessary to conduct audits and assessments”;
“How can regulators and other actors incentivize and support credible assurance of AI systems along with other forms of accountability”;
“What different approaches might be needed in different industry sectors—like employment or health care.”
A 31-page document (PDF) notes how last year’s CHIPS and Science Act and the AI Bill of Rights published by the White House emphasize the importance of developing trustworthy AI, and suggests areas worth inspecting in AI audits: “harmful bias and discrimination, effectiveness and validity, data protection and privacy, and transparency and explainability.”
To take NTIA up on this invitation, visit
Read more on pcmag.com