Google Admits Biased AI Program Offended Users

Google’s chief executive has officially spoken out about the controversial AI program Gemini and the biased results users received from it.

Google’s chief said some of the results people were getting were “biased” and “unacceptable.” Gemini, Google’s newest AI program, has produced controversial results for users’ prompts, such as portraying German WWII soldiers as people of color.

Sundar Pichai acknowledged in a statement that Gemini’s results had upset many. He wrote in an email, “I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong.”

“Our teams have been working around the clock to address these issues. We’re already seeing a substantial improvement on a wide range of prompts,” Pichai continued.

Many posts on social media have shown the results people were getting from the program. It generated numerous images of historical figures, such as the Pope, the Founding Fathers, and Vikings, with the wrong ethnicity and/or gender. The episode shows that as AI develops, ensuring accuracy will remain an ongoing challenge.

This isn’t the first time AI programs have portrayed people inaccurately. Other AI systems have also produced biased results; OpenAI’s image generator, for example, would depict a judge as a white person but a gunman as a black person.

Pichai said Google would be making changes and reevaluating Gemini to make the program’s results more accurate and reliable.

Interest in AI has grown enormously as the technology has advanced, and many companies are rolling out their own AI programs. This has led multiple companies to revise their systems to ensure unbiased and accurate results.

Copyright 2024,