Remember Cambridge Analytica? The British political consultancy operated from 2013-18 and had one mission: to hoover up the data of Facebook users without their knowledge, then use that personal information to tailor political ads that would, in theory, sway their voting intentions (for an exorbitant price, of course). When the scandal broke, the firm was accused of meddling in everything from the UK's Brexit referendum to Donald Trump's 2016 presidential campaign.
Well, AI is going to make that kind of stuff seem like patty-cake. In a year that will see a presidential election in the US and a general election in the UK, the tech has reached a stage where it can deepfake candidates, mimic their voices, craft political messages from prompts, and personalise those messages to individual users, and that's barely scratching the surface. When you can watch Biden and Trump arguing over ranking the Xenoblade games, it's inconceivable that the technology will stay out of these elections or, indeed, any other.
This is a serious problem. Online discourse is bad enough as it is, with partisans on either side willing to believe anything of the other, and misinformation already rife. Adding outright fabricated content and AI-driven targeting (among other things) to that mix is potentially explosive. And OpenAI, the most high-profile company in the field, knows it may be heading into choppy waters. But while it seems good at identifying the issues, it's unclear whether it'll actually be able to get a handle on them.
OpenAI says it's all about "protecting the integrity of elections" and it wants "to make sure our technology is not used in a way that could undermine this process." There's some bumf about all the positives AI brings, how unprecedented it is, yadda yadda yadda, then we get to the crux of the matter: "potential abuse".
The company lists some of these problems as "misleading 'deepfakes', scaled influence operations, or chatbots impersonating candidates."