Bumble, a leading dating app, is taking steps to ensure the authenticity of its users by targeting AI-generated profile photos. This move aims to maintain genuine connections and prevent users from being misled by digitally crafted images. The company has introduced a feature that allows users to report profiles suspected of using AI to fake their photos. The initiative follows a Bumble survey in which 71% of Gen Z and Millennial respondents said they want restrictions on AI-generated images and bios, and consider it catfishing when users post photos of places or activities they have never actually experienced.
Risa Stein, Bumble's VP of Product, emphasized the importance of creating a safe and trustworthy dating environment. "Creating a space for meaningful connections means removing anything misleading or dangerous. We're committed to continually improving our technology to make Bumble a safe and trusted dating environment," Stein said. The new feature aligns with Bumble's broader commitment to authenticity and user safety.
In addition to the anti-AI photo feature, Bumble has implemented several other AI tools to enhance user safety and authenticity. One such tool is the Spam and Scam Detector, launched earlier this year. It has significantly reduced the prevalence of spam, scams, and fake profiles on the platform, with reports of such profiles dropping by 45% within two months of its launch.
Another notable feature is the Nude Image Blur, designed to protect users from unwanted explicit photos. This tool blurs potential nude images and alerts users before they view them, allowing them to block or report the sender if necessary. The feature acts as a safeguard, ensuring a more respectful and secure communication experience on the platform.