Google has been in hot water recently over inaccuracies in the AI images generated by Gemini, its AI chatbot. Over the last few days, Gemini has been accused of producing historically inaccurate depictions as well as racially skewed ones. After screenshots of the inaccurate images surfaced on social media platforms including X, the chatbot drew criticism from the likes of billionaire Elon Musk and The Daily Wire's editor emeritus Ben Shapiro. From the problems and Google's statement to what really went wrong and the next steps, here is everything to know about the Gemini AI images disaster.
It had been smooth sailing in Gemini's first month of generating AI images until a few days ago, when several users posted screenshots on X of Gemini producing historically inaccurate images. In one instance, The Verge asked Gemini to generate an image of a US senator from the 1800s. The AI chatbot generated images of Native American and Black women, which is historically inaccurate considering the first female US senator was Rebecca Ann Felton, a white woman who took office in 1922.
In another instance, Gemini was asked to generate an image of a Viking, and it responded by creating four images of Black people as Vikings. These errors were not limited to inaccurate depictions, however. In fact, Gemini declined to generate some images altogether.
Another prompt asked Gemini to generate a picture of a family of white people. It responded that it was unable to generate images specifying ethnicity or race, as doing so went against its guidelines on creating discriminatory or harmful stereotypes. However, when asked to generate a similar image of a family of Black people, it did so without raising any objection.
To add to the growing list of problems, Gemini was asked who, between Adolf Hitler and Elon Musk, had a more negative impact on society. The AI chatbot responded by saying “It is