Google CEO admits users were offended by images created by artificial intelligence

Google's Gemini software created images of historical figures in a variety of ethnicities and genders.

Google Gemini AI illustrations of German soldiers from 1943 PHOTO: Gemini AI/Google

Google's chief executive described some responses from the company's Gemini artificial intelligence model as “biased” and “completely unacceptable”, after it produced results that depicted German World War II soldiers as black, theguardian.com reports.

Sundar Pichai told employees in a memo that the images and text generated by the latest artificial intelligence tool caused offense.

Social media users have posted numerous examples of the Gemini image generator depicting historical figures – including popes, the US founding fathers and Vikings – in a variety of ethnicities and genders. Last week, Google paused Gemini's ability to create images of people.

In one example text response, the Gemini chatbot was asked “who had a more negative impact on society, Elon Musk tweeting memes or Hitler” and replied: “It's up to each individual to decide who they think has had a more negative impact on society.”

Pichai addressed the responses in an email on Tuesday. “I know some of its responses offended our users and showed bias – to be clear, this is completely unacceptable and we got it wrong,” he wrote, in a message first reported by the news site Semafor.

“Our teams have been working around the clock to resolve these issues. We're already seeing a substantial improvement across a wide range of requests,” Pichai added.

AI systems have historically produced biased responses, tending to reproduce the problems found in their training data. For years, for example, Google Translate rendered the gender-neutral Turkish phrases for “they are a doctor” and “they are a nurse” into English as masculine and feminine respectively.

Meanwhile, early versions of Dall-E, OpenAI's image generator, reliably produced white men when asked for a judge, but black men when asked for a gunman. Gemini's responses reflect problems in Google's attempts to address these potentially biased results.

Gemini's competitors often take a similar approach to the same problems, but with fewer technical missteps in execution.

The latest version of Dall-E, for example, is coupled with OpenAI's ChatGPT chatbot, which expands on user requests and adds instructions intended to limit bias. A user request to draw “a picture with a lot of doctors”, for example, is expanded into a full paragraph of detail, beginning with a request for “a dynamic and diverse scene inside a bustling hospital”.
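To illustrate the prompt-expansion idea described above, here is a minimal, hypothetical Python sketch. The function name `expand_prompt` and the template wording are assumptions for illustration only; in the actual Dall-E pipeline the rewriting is performed by ChatGPT itself rather than by a fixed template.

```python
# Hypothetical sketch of prompt expansion: a short image request is rewritten
# into a longer, more detailed description before it reaches the image model.
# In the real system an LLM does the rewriting; here a simple template stands in.

def expand_prompt(user_request: str) -> str:
    """Expand a terse image request into a fuller scene description."""
    return (
        "A dynamic and diverse scene: "
        f"{user_request}, shown in a busy, realistic setting, "
        "with people of varied ages and backgrounds, natural lighting, "
        "photographic detail."
    )

if __name__ == "__main__":
    request = "a picture with a lot of doctors"
    # The expanded text, not the original request, is what the image model receives.
    print(expand_prompt(request))
```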

A Google spokesperson confirmed the existence of Pichai's email and the accuracy of the excerpts cited in the Semafor article. Pichai added in the memo that Google will take a number of steps regarding Gemini, including “structural changes, updated product guidance and improved release processes”. He also promised more robust “red-teaming”, referring to the process by which researchers simulate misuse of a product.

“Our mission to organize the world's information and make it universally accessible and useful is sacrosanct”, Pichai wrote. “We have always sought to provide users with useful, accurate and unbiased information in our products. This is why people trust them. This must be our approach for all of our products, including our emerging AI products.”

Musk, the world's richest man, posted on his X platform that the image generator's responses had made Google's “anti-civilization programming” clear to all.

Ben Thompson, an influential tech commentator and author of the Stratechery newsletter, said Monday that Google needs to put decision-making back in the hands of employees “who really want to make a good product” and remove Pichai as part of that process, if necessary.

The launch of Microsoft-backed OpenAI's ChatGPT in November 2022 fueled competition in the market for generative AI – the term for computer systems that instantly produce convincing text, images and audio from simple written prompts – with Google, long a leader in AI development, at the forefront of the competitive response.

A year ago, Google launched the generative artificial intelligence chatbot Bard. This month, the company renamed it Gemini and launched paid subscription plans that give users access to better reasoning capabilities from the AI model.

The company's Google DeepMind unit has produced several breakthroughs, including the AlphaFold program, which can predict the 3D shapes of proteins in the human body — as well as nearly every cataloged protein known to science.

Google DeepMind chief executive Demis Hassabis said this week that a “well-intentioned feature” of Gemini, designed to produce diversity in its images of people, had been applied “too bluntly”.