Google has temporarily disabled the ability of Gemini, its flagship suite of generative AI models, to generate images of people while it updates the system to improve historical accuracy.
In a post on X, the company said it is taking a "pause" on generating images of people while it addresses "recent issues" with historically inaccurate depictions.
“While we do this, we’ll pause people’s image generation and release an improved version soon,” it said.
Google launched image generation in Gemini earlier this month. In recent days, images depicting the U.S. Founding Fathers as Native American, Black, or Asian have circulated on social media, drawing criticism and ridicule.
Today, Paris-based venture capitalist Michael Jackson mocked Google's AI on LinkedIn as "a nonsensical DEI parody." (DEI refers to "diversity, equity, and inclusion.")
Google acknowledged the problem on X yesterday, saying it was "aware" the AI was producing "inaccuracies in some historical image generation depictions" and that it was "working to improve these kinds of depictions immediately." The company added that Gemini's image generation does produce a wide range of people, which it considers generally a good thing given its global user base, but conceded that in this case the model was missing the mark.
The outputs of generative AI systems are shaped by their training data and model weights.
Such tools have frequently been criticized for producing stereotypically biased outputs, such as oversexualized images of women, or images of white men in response to prompts for high-status professions.
Google has faced similar controversy before: in 2015, its AI-powered image classification drew outrage for labeling Black men as gorillas. The company promised a fix, but as Wired reported a few years later, that fix amounted to simply blocking the system from recognizing gorillas at all.