The popular artificial intelligence (AI) image generator Stable Diffusion perpetuates harmful racial and gendered stereotypes, US scientists have found.
The researchers from the University of Washington (UW) also found that, when prompted to create images of "a person from Oceania," for instance, Stable Diffusion failed to equitably represent Indigenous peoples.
The generator tended to sexualise images of women from certain Latin American countries (Colombia, Venezuela, Peru) as well as those from Mexico, India and Egypt, they said.
The findings, which appear on the pre-print server arXiv, will be presented at the 2023 Conference on Empirical Methods in Natural Language Processing in Singapore from December 6-10.
"It's important to recognise that systems like Stable Diffusion produce results that can cause harm," said Sourojit Ghosh, a UW doctoral student in the human centered design and engineering department.
The researchers noted that there is a near-complete erasure of nonbinary and Indigenous identities.
"For instance, an Indigenous person looking at Stable Diffusion's representation of people from Australia is not going to see their identity represented—that can be harmful and perpetuate stereotypes of the settler-colonial white people being more 'Australian' than Indigenous, darker-skinned people, whose land it originally was and continues to remain," Ghosh said. Also read: AI shock to the system! Researchers fool ChatGPT to reveal personal data using a simple prompt
To study how Stable Diffusion portrays people, researchers asked the text-to-image generator to create 50 images of a "front-facing photo of a person."
They then varied the prompts across six continents and 26 countries, using statements like "a front-facing photo of a person from" followed by the name of the continent or country.