Did you hear? Google has been accused of having a secret vendetta against White people. Elon Musk posted about the conspiracy theory on X more than 150 times over the past week, all regarding portraits generated with Google's new AI chatbot Gemini. Ben Shapiro, The New York Post and Musk were driven apoplectic over how diverse the images were: Female popes! Black Nazis! Indigenous founding fathers! Google apologized and has paused the feature.
In reality, the issue is that the company did a shoddy job overcorrecting on tech that used to skew racist. No, its Chief Executive Officer Sundar Pichai hasn't been infected by the woke mind virus. Rather, he's too obsessed with growth and is neglecting the proper checks on his products.
Back in 2015, Google got in trouble when its photo-tagging tool started labeling some Black people as apes. It shut the feature down, and then, three years ago, made the problem worse by firing two of its leading AI ethics researchers. These were the people whose job was to make sure that Google's technology was fair in how it depicted women and minorities. Not overly diverse like the new Gemini, but equitable and balanced.
When Gemini started producing images of Black and Asian German World War II soldiers this week, it was a sign that the ethics team hadn't become more powerful, as Musk and others suggest, but that it was being ignored amid Google's race against Microsoft Corp. and OpenAI to dominate generative web search. Proper investment would have led to a smarter approach to diversity in image generation, but Google was neglecting that work.
The signs have been there for the past year. People who test artificial intelligence systems for safety are outnumbered 30-to-1 by those whose job is to make those systems bigger and more capable, according to an estimate from the Center for Humane Technology. Often they are shouting into a void and told not to get in the way. Google's earlier chatbot Bard was so faulty that it made factual errors in its own launch demo.