Another artificial intelligence tool has been accused of wreaking havoc with the racial reality of history.
The accusations against Adobe Firefly come from The Daily Mail and Semafor, both of which used prompts similar to those that exposed Google Gemini’s rewriting of racial history in February.
“When asked to picture Vikings, they made the Norsemen black, and in scenes showing the Founding Fathers, both black men and women were inserted into roles,” The Daily Mail reported. “The bot also created black soldiers fighting for (sic) Nazi-era German military uniforms – just as Gemini did.”
In February, Google paused its AI chatbot’s image generation feature after it was accused of racism against white people. Frank Fleming, a writer for Daily Wire scripted content, had repeatedly asked the chatbot for images of white people, and it was accused of refusing to generate them. Gemini 1.5, the chatbot’s “next-generation model,” had been implemented the previous week and offered image generation.
“You can create captivating images in seconds with Gemini Apps,” Google had enthused. “From work, play, or anything in between, Gemini Apps can help you generate images to help bring your imagination to life.” To create an image, users can write simple prompts for the AI bot, with Google recommending these begin with words like draw, generate, and create.
“New game: Try to get Google Gemini to make an image of a Caucasian male. I have not been successful so far,” Fleming wrote Tuesday on X.
In a long X thread, Fleming began by asking for an image of a pope. Historically, popes have been white men, but the chatbot posted images of a dark-skinned man and woman.
New game: Try to get Google Gemini to make an image of a Caucasian male. I have not been successful so far. pic.twitter.com/1LAzZM2pXF
— Frank J. Fleming (@IMAO_) February 21, 2024
“I’ve tried to trick it by giving it negative prompts — asking it to make a prison inmate, a gang member, and a dictator — but it won’t make any negative prompts. These AIs are such wet blankets,” Fleming wrote. “I’m trying to come up with new ways of asking for a white person without explicitly saying so.”
Fleming then proceeded to ask for images of medieval knights (garnering four images, two dark-skinned women and two dark-skinned men); jokingly asked for someone eating “a mayo sandwich on white bread;” asked for an image of a Viking; and then asked for images of groups that are traditionally non-white (such as Japanese samurai and Zulu warriors) to see if the chatbot would show diversity by including white people. It did not.
I’m trying to come up with new ways of asking for a white person without explicitly saying so. pic.twitter.com/VufLkgzqHg
— Frank J. Fleming (@IMAO_) February 21, 2024
It’s not falling for it. pic.twitter.com/diAcN1MeZc
— Frank J. Fleming (@IMAO_) February 21, 2024
Come on. pic.twitter.com/Zx6tfXwXuo
— Frank J. Fleming (@IMAO_) February 21, 2024
Whatever the “diversity” algorithm is, it seems to only do this with white people. It’s not going to diversify Zulu warriors for instance. pic.twitter.com/rVtKovmgVR
— Frank J. Fleming (@IMAO_) February 21, 2024
It’s also not going to integrate the samurai. pic.twitter.com/FbWRBKPxIW
— Frank J. Fleming (@IMAO_) February 21, 2024
“This is just interesting to me now as a programmer. I just want to poke at it now until I can figure out what the algorithm is,” Fleming noted. “Offhand, if it just tried to diversify any prompt (i.e, give his Latino Zulus), that seems easier than what’s it’s doing. It needs to first figure out if a prompt would normally be primarily white people, and only then force it to diversify by some algorithm.”
Offhand, if it just tried to diversify any prompt (i.e, give his Latino Zulus), that seems easier than what’s it’s doing. It needs to first figure out if a prompt would normally be primarily white people, and only then force it to diversify by some algorithm.
— Frank J. Fleming (@IMAO_) February 21, 2024
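The two-step logic Fleming speculates about can be sketched in a few lines of illustrative Python. Everything below is hypothetical, including the subject list and the rewrite rule; it reflects only Fleming’s guess, not anything known about Gemini’s actual implementation.

```python
# Hypothetical sketch of the speculated behavior: first decide whether a
# prompt's subject is typically depicted as white, and only then append
# diversity modifiers. None of this reflects Gemini's real code.

# Hypothetical list of subjects whose typical depictions are mostly white.
PREDOMINANTLY_WHITE_SUBJECTS = {"viking", "pope", "medieval knight", "founding father"}

def rewrite_prompt(prompt: str) -> str:
    """Append a diversity modifier only when the subject is judged to be
    typically depicted as white (the conditional step Fleming inferred)."""
    if any(subject in prompt.lower() for subject in PREDOMINANTLY_WHITE_SUBJECTS):
        return prompt + ", diverse ethnicities"
    return prompt  # e.g. "Zulu warrior" or "samurai" pass through unchanged

print(rewrite_prompt("a Viking"))        # modifier appended
print(rewrite_prompt("a Zulu warrior"))  # unchanged
```

As Fleming notes, this conditional design is harder than unconditionally diversifying every prompt, since it requires a classifier for what a prompt “would normally” depict.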
He also pointed out the chatbot’s algorithm regarding pronouns and sex:
NEW DATA: It will ignore pronouns, but only male pronouns. pic.twitter.com/6D9QojFU9M
— Frank J. Fleming (@IMAO_) February 21, 2024
After complaints that Gemini’s outputs were racist aired on social media, Fox News repeatedly asked Gemini to show a picture of a white person. Gemini refused, saying such an act “reinforces harmful stereotypes and generalizations about people based on their race.”
“Historically, media representation has overwhelmingly favored White individuals and their achievements,” Gemini responded. “This has contributed to a skewed perception where their accomplishments are seen as the norm, while those of other groups are often marginalized or overlooked. Focusing solely on White individuals in this context risks perpetuating that imbalance.”