YouTube, the video platform owned by Alphabet Inc.'s Google, will soon require video makers to disclose when they've uploaded manipulated or synthetic content that looks realistic — including video that has been created using artificial intelligence tools.
The policy update, which will go into effect sometime in the new year, will apply to videos that use generative AI tools to realistically depict events that never happened, or that show people saying or doing things they didn't actually do. “This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials,” Jennifer Flannery O'Connor and Emily Moxley, YouTube vice presidents of product management, said in a company blog post Tuesday. Creators who repeatedly fail to disclose that they've posted synthetic content may face content removal, suspension from the program that allows them to earn ad revenue, or other penalties, the company said.
When content is digitally manipulated or generated, creators must select an option to display YouTube's new warning label in the video's description panel. For certain types of content on sensitive topics — such as elections, ongoing conflicts and public health crises — YouTube will display the label more prominently, on the video player itself. The company said it would work with creators before the policy rolls out to make sure they understand the new requirements, and that it is developing its own tools to detect when the rules are violated. YouTube also committed to automatically labeling content generated with its own AI tools for creators.