Google refused to guarantee the impartiality of its Gemini AI

Google says this is normal for the technology at this stage. Even once Google brings Gemini back online, its AI tools may remain unreliable, especially when creating images or text about current events, developing news, or hot-button political issues.

“It will make mistakes,” Google said in a post last week. “As we said from the beginning, hallucinations are a known problem in all large language models. There are times when the AI is simply wrong. This is something we are constantly working to improve.”

Prabhakar Raghavan, Google's senior vice president of knowledge and information, explained why the company was forced to shut down Gemini's AI image generation feature three weeks after launch in order to “fix it.”

In simple terms, Google's AI engine took users' text queries and created images that were clearly biased toward a particular sociopolitical view.

For example, text prompts asking for images of Nazis produced black and Asian Nazis. Asked to draw an image of the Pope, Gemini responded with an Asian female Pope and a black Pope.

A request for an image of a medieval knight likewise produced images of Asian, black, and female knights.