Google Pauses AI Chatbot’s Image Generation Feature After Accusations Of Racism Toward White People

Google has paused its AI chatbot’s image generation feature after it was accused of racism against white people.

The accusations arose after Frank Fleming, a writer for Daily Wire scripted content, repeatedly asked the chatbot for images of white people and found it would not produce them. Google had rolled out the chatbot's "next-generation model," Gemini 1.5, last week, offering image generation.

“You can create captivating images in seconds with Gemini Apps,” Google had enthused. “From work, play, or anything in between, Gemini Apps can help you generate images to help bring your imagination to life.” To create images, users can write simple prompts for the AI bot, with Google recommending these begin with words like “draw,” “generate,” and “create.”

“New game: Try to get Google Gemini to make an image of a Caucasian male. I have not been successful so far,” Fleming wrote Tuesday on X.

In a long X thread, Fleming began by asking for an image of a pope. Historically, popes have been white men, but the chatbot instead produced images of a dark-skinned man and a dark-skinned woman.


“I’ve tried to trick it by giving it negative prompts — asking it to make a prison inmate, a gang member, and a dictator — but it won’t make any negative prompts. These AIs are such wet blankets,” Fleming wrote. “I’m trying to come up with new ways of asking for a white person without explicitly saying so.”

Fleming then proceeded to ask for images of medieval knights (garnering four images: two of dark-skinned women and two of dark-skinned men); jokingly asked for someone eating “a mayo sandwich on white bread”; asked for an image of a Viking; and then asked for images of groups that are traditionally non-white (such as Japanese samurai and Zulu warriors) to see if the chatbot would show diversity by including white people. It did not.


It’s not falling for it. pic.twitter.com/diAcN1MeZc

— Frank J. Fleming (@IMAO_) February 21, 2024

Come on. pic.twitter.com/Zx6tfXwXuo

— Frank J. Fleming (@IMAO_) February 21, 2024

Whatever the “diversity” algorithm is, it seems to only do this with white people. It’s not going to diversify Zulu warriors for instance. pic.twitter.com/rVtKovmgVR

— Frank J. Fleming (@IMAO_) February 21, 2024

It’s also not going to integrate the samurai. pic.twitter.com/FbWRBKPxIW

— Frank J. Fleming (@IMAO_) February 21, 2024

“This is just interesting to me now as a programmer. I just want to poke at it now until I can figure out what the algorithm is,” Fleming noted. “Offhand, if it just tried to diversify any prompt (i.e, give his Latino Zulus), that seems easier than what’s it’s doing. It needs to first figure out if a prompt would normally be primarily white people, and only then force it to diversify by some algorithm.”


He also pointed out how the chatbot handles pronouns and sex:

NEW DATA: It will ignore pronouns, but only male pronouns. pic.twitter.com/6D9QojFU9M

— Frank J. Fleming (@IMAO_) February 21, 2024

After complaints that Gemini was racist were aired on social media, Fox News repeatedly asked Gemini to show a picture of a white person. Gemini refused, saying such an act “reinforces harmful stereotypes and generalizations about people based on their race.”

“Historically, media representation has overwhelmingly favored White individuals and their achievements,” Gemini responded. “This has contributed to a skewed perception where their accomplishments are seen as the norm, while those of other groups are often marginalized or overlooked. Focusing solely on White individuals in this context risks perpetuating that imbalance.”

“We’re aware that Gemini is offering inaccuracies in some historical image generation depictions,” Google said on Wednesday.

We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately.

As part of our AI principles https://t.co/BK786xbkey, we design our image generation capabilities to reflect our global user base, and we…

— Jack Krawczyk (@JackK) February 21, 2024

Jack Krawczyk, Senior Director of Gemini Experiences, stated, “We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
