Just a few years ago it was easy to spot at a glance that an AI image wasn't real: edges of objects blended together, proportions felt off, people had too many fingers, and it never got cats right. Now it's reaching the point where the fakes can be genuinely hard to tell apart. In the run-up to the US elections, TechCrunch hosted a talk with AI experts on AI disinformation (misinformation spread with deliberate intent to deceive), and Meta's self-regulation policies found themselves in the firing line.
The conversation landed on Meta's practices because Pamela San Martín, co-chair of Meta's Oversight Board, was one of the key speakers.
The Oversight Board, according to its own FAQ, "is a body of experts from around the world that exercises independent judgment and makes binding decisions on what content should be allowed on Facebook and Instagram".
However, just a few questions further down the page, the FAQ notes that the board is funded directly by Meta, to the tune of $280 million over the last five years alone. That claim of independence, paired with the knowledge of who pays the bills, suggests a tension the other panellists picked up on.
San Martín, whilst acknowledging the problems of AI and Meta's own need to learn from them, praised AI as a tool for battling AI misinformation.
"Most social media content is moderated by automation and automation uses AI, either to flag certain content to be reviewed by humans, or to flag certain content to be actioned."
Building on this, she also suggested that the best way to combat disinformation isn't always to remove it, but sometimes to add context or label it correctly. Think of X's Community Notes feature and you have a good idea of what that looks like. She also noted that public reporting of disinformation is mostly useful for public figures and widely circulated claims, and does little to prevent harm to private individuals.