Fans of Taylor Swift and politicians expressed outrage on Friday at AI-generated fake images that went viral on X and were still available on other platforms. One image of the US megastar was seen 47 million times on X, the former Twitter, before it was removed Thursday. According to US media, the post was live on the platform for around 17 hours.
Deepfake images of celebrities are not new but activists and regulators are worried that easy-to-use tools employing generative artificial intelligence (AI) will create an uncontrollable flood of toxic or harmful content.
But the targeting of Swift, the second most listened-to artist in the world on Spotify (after Canadian rapper Drake), could shine a new light on the phenomenon, with her legions of fans outraged at the development.
"The only 'silver lining' about it happening to Taylor Swift is that she likely has enough power to get legislation passed to eliminate it. You people are sick," wrote influencer Danisha Carter on X.
X is one of the biggest platforms for porn content in the world, analysts say, as its policies on nudity are looser than those of the Meta-owned platforms Facebook and Instagram.
This has been tolerated by Apple and Google, the gatekeepers for online content through the guidelines they set for their app stores on iPhones and Android smartphones.
In a statement, X said that "posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content."
The Elon Musk-owned platform said that it was "actively removing all identified images and taking appropriate actions against the accounts responsible for posting them."
It was also "closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed."
Swift's representatives did not immediately respond to a request for comment.
"What's happened to Taylor Swift is nothing new. For years, women have been targets of deepfakes without their consent," said Yvette Clarke, a