Alphabet Inc.'s Google will soon require that all election advertisers disclose when their messages have been altered or created by artificial intelligence tools.
The policy update, which applies starting mid-November, requires election advertisers across Google's platforms to alert viewers when their ads contain images, video or audio from generative AI — software that can create or edit content given a simple prompt. Advertisers must include prominent language like, “This audio was computer generated,” or “This image does not depict real events” on altered election ads across Google's platforms, the company said in a notice to advertisers. The policy does not apply to minor fixes, such as image resizing or brightening.
The update will improve Google's transparency measures for election ads, the company said, especially given the growing prevalence of AI tools — including Google's — that can produce synthetic content. “It'll help further support responsible political advertising and provide voters with the information they need to make informed decisions,” said Michael Aciman, a Google spokesperson.
Google's new policy doesn't apply to videos uploaded to YouTube that aren't paid advertising, even if they are uploaded by political campaigns, the company said. Meta Platforms Inc., which owns Instagram and Facebook, and X, formerly known as Twitter, don't have specific disclosure rules for AI-generated ads. Meta said it was getting feedback from its fact-checking partners on AI-generated misinformation and reviewing its policies.
Like other digital advertising services, Google has been under pressure to tackle misinformation across its platforms, including false claims about elections and voting that could undermine trust in the electoral process.