Going forward, seeing might not be believing, as digital technologies make the fight against misinformation even trickier for embattled social media giants.
In a grainy video, Ukrainian President Volodymyr Zelenskyy appears to tell his people to lay down their arms and surrender to Russia. The video — quickly debunked by Zelenskyy — was a deep fake, a digital imitation generated by artificial intelligence (AI) to mimic his voice and facial expressions.
High-profile forgeries like this are just the tip of what is likely to be a far bigger iceberg. A digital deception arms race is underway: some AI models are built to deceive online audiences, while others are developed to detect the misleading or deceptive content those same models generate. Amid growing concern about AI-assisted text plagiarism, one such model, Grover, is designed to distinguish news articles written by humans from those generated by AI.
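For illustration, a detector like Grover can be thought of as an ordinary text classifier that scores a passage as human- or machine-written. Below is a minimal sketch in Python, assuming the Hugging Face transformers library and a publicly available detector checkpoint (the checkpoint name is an assumption for illustration; Grover's own weights are distributed separately by its authors):

```python
# Minimal sketch: score a news passage as human- or machine-written.
# The checkpoint below is an assumed stand-in for a Grover-style detector.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",  # assumed checkpoint
)

article = "Scientists announced today a breakthrough in fusion energy..."
result = detector(article)[0]
print(f"label={result['label']}, score={result['score']:.2f}")
```

In practice such detectors output a probability rather than a verdict, and their accuracy degrades as generators improve, which is exactly why the arms-race framing fits.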
As online trickery and misinformation surge, the armour that platforms built against them is being stripped away. Since taking over Twitter, Elon Musk has gutted the platform's online safety division, and misinformation is back on the rise.
Musk, like others, looks to technological fixes to solve his problems. He has already signalled a plan to increase the use of AI in Twitter's content moderation. But this is neither sustainable nor scalable, and it is unlikely to be a silver bullet. Microsoft researcher Tarleton Gillespie suggests: “automated tools are best used to identify the bulk of the cases, leaving the less obvious or more controversial identifications to human reviewers”.
Some human intervention remains in the automated decision-making systems embraced by platforms.
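A toy sketch of that division of labour follows, with the thresholds and the notion of a single "misinformation score" assumed purely for illustration, not drawn from any platform's actual policy:

```python
# Toy triage sketch: automate the clear-cut cases, escalate the ambiguous ones.
# The thresholds and the "misinformation score" are illustrative assumptions.

def moderate(score: float,
             remove_above: float = 0.95,
             allow_below: float = 0.05) -> str:
    """Route a post given a model's misinformation score in [0, 1]."""
    if score >= remove_above:
        return "auto-remove"    # clear violation: act without human input
    if score <= allow_below:
        return "auto-allow"     # clearly benign: no action needed
    return "human-review"       # ambiguous middle: send to a reviewer

# Example: three posts with hypothetical model scores.
for post_id, score in [("post-A", 0.99), ("post-B", 0.50), ("post-C", 0.01)]:
    print(post_id, "->", moderate(score))
```

The design point is the middle band: the wider it is, the more human labour moderation requires, which is where cuts to safety teams bite.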