Political ads on Google will soon need to clearly disclose if they contain AI-generated content, such as deepfakes of a presidential candidate uttering something they never said in real life.
The company plans to institute the new requirements in mid-November, according to a Google support document published on Wednesday.
The requirement will apply to image-, video-, and audio-based ads. If an ad contains “synthetic content that inauthentically depicts real or realistic-looking people or events,” Google says it will need to prominently carry a disclosure noting the AI-generated elements.
“Given the growing prevalence of tools that produce synthetic content, we’re expanding our policies a step further to require advertisers to disclose when their election ads include material that’s been digitally altered or generated,” the company told PCMag.
The only exception is for political ads with minor alterations made by AI editing tools, such as resizing images, brightening a scene, or making background edits; those ads won’t need to carry the disclosure.
The requirement comes as some election groups have already been running AI-generated political ads. In June, a political action committee supporting Republican candidate Ron DeSantis ran an attack ad featuring an AI-generated voice mimicking Donald Trump. The synthetic voice reads aloud a social media post Trump actually wrote on Truth Social, yet the ad carried no disclosure that the voice was AI-generated.
In April, the Republican National Committee also ran a political ad attacking President Biden that featured numerous AI-generated images depicting the alleged crises that would follow if he were reelected to a second term.