Back in 1996, at age 10, I played a computer game at a friend’s house called Spycraft: The Great Game. In the game, you play as a CIA operative investigating an assassination plot; to mislead a suspect during an interrogation, you have the option to doctor a photograph. The process blew my 10-year-old mind — so much so that I’ve remembered how powerful that minigame felt all these years later. Although it was blurry and pixelated, the photo editor that appeared in Spycraft was a bit like what Adobe Photoshop would one day become. In 1996, it felt like the stuff of high-tech espionage and trickery. In 2023, it’s utterly mundane. Altering a photograph is no longer difficult or expensive. Anyone can do it, and as a result, we have all come to accept that we cannot trust any image we see.
Deepfake technology has already proven that we can’t trust video or audio recordings, either. And the prevalence of generative artificial intelligence has only made creating such deepfakes easier. We all need to get used to this new reality — and fast.
Genna Bain, the wife of the now-deceased YouTuber John “TotalBiscuit” Bain, posted on Twitter last week about a new concern she faces thanks to the advancements of AI tech: “Today was fun. Being faced with making a choice of scrubbing all of my late husband’s lifetime of content from the internet. Apparently people think it’s okay to use his library to train voice AIs to promote their social commentary and political views.” In response, she received sympathy and pleas from her husband’s fans to preserve his online legacy.
But here’s the problem. There’s no practical way that Genna Bain, or anyone else in her position, could adequately prevent anyone from creating a deepfake video or
Read more on polygon.com