Twitter is extending its crowdsourced fact-checking feature to images, after a tweet that used an AI-generated photo to lie about an explosion at the Pentagon went viral.
Twitter's Community Notes feature lets contributors flag a tweet that may be misleading and explain what they think is wrong with it. Highly rated notes are publicly affixed to the tweet. With the new Notes on Media option, contributors can flag images as well as text. "Notes attached to an image will automatically appear on recent and future matching images," Twitter says.
To participate, contributors will need to have an Impact Score of 10 or above, at which point they'll see an option on some tweets to mark a note as "About the image."
"This option can be selected when you believe the media is potentially misleading in itself, regardless of which Tweet it is featured in," Twitter says.
Notes on Media currently supports tweets with a single image; Twitter is working to expand it to videos, as well as tweets with multiple images and videos.
Notes on Media is "currently intended to err on the side of precision when matching images, which means it likely won't match every image that looks like a match to you. We will work to tune this to expand coverage while avoiding erroneous matches," Twitter added.
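Twitter has not said how it matches one flagged image against "recent and future" copies, but near-duplicate detection is commonly done with perceptual hashing. The sketch below illustrates one such technique, average hashing, on a plain grid of grayscale values; it is a hypothetical illustration of the general approach, not Twitter's actual system.

```python
def average_hash(pixels):
    """Compute a simple average hash from a 2D grid of grayscale values.

    Each bit is 1 if the pixel is brighter than the image's mean, else 0.
    Real systems first downscale the image (e.g., to 8x8) so the hash is
    robust to resizing and recompression.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests a near-duplicate."""
    return sum(a != b for a, b in zip(h1, h2))

# A tiny 4x4 "image" and a slightly brightened copy of it.
img = [[10, 200, 30, 220],
       [15, 210, 25, 230],
       [12, 205, 35, 225],
       [18, 215, 28, 235]]
brighter = [[p + 5 for p in row] for row in img]

h1 = average_hash(img)
h2 = average_hash(brighter)
# Brightening shifts every pixel and the mean equally, so the bit
# pattern is unchanged and the distance is 0 -> treat as a match.
print(hamming_distance(h1, h2))  # → 0
```

The "err on the side of precision" tradeoff Twitter describes maps directly onto the distance threshold in such a scheme: a stricter threshold means fewer false matches but more lookalike images left unmatched.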
This comes shortly after an account using the @BloombergFeed handle tweeted an AI-generated image that it claimed showed an explosion near the Pentagon. The image was fake and quickly debunked, but the use of the Bloomberg name and the presence of a blue checkmark created confusion, and highlighted why stripping verification from those who refused to pay for Twitter Blue can be problematic in emergency situations.
Read more on pcmag.com