Google's AI model has generated all sorts of inaccuracies over the past week.
Since its inception, Google has had a mission statement that is now practically enshrined as tradition: "to organize the world's information and make it universally accessible and useful," writes businessinsider.com.
Google says it is as committed to that mission today as it was in 1998, when co-founders Larry Page and Sergey Brin were working out of a garage in Menlo Park.
But as Google rushes into a new era of artificial intelligence, fears are growing that the technology it is rolling out could undermine that core mission.
Critics say its artificial intelligence risks suppressing information instead, by being too "woke".
Google's AI problems
Google owns over 90% of the search market, giving it dominant control over the world's online information flow.
As its artificial intelligence becomes an increasingly important tool for helping users find information, the company bears a growing responsibility to ensure that facts are presented accurately.
But there are growing concerns that its AI is falling short on that front.
The first major signs came last week, when users of Google's Gemini AI model reported problems with its image generation feature after it failed to accurately depict the subjects it was asked to portray.
One user, for example, asked Gemini to generate images of America's founding fathers. Instead, it produced "historically inaccurate" images of them, "showcasing the gender and ethnic diversity" of 18th-century leaders in the process. Google has suspended the feature while it works on a fix.
The problems aren't just limited to image generation
As Peter Kafka notes, Gemini struggled, for example, to answer whether Adolf Hitler or Elon Musk had caused more harm to society. Musk's tweets are "insensitive and harmful," Gemini said, while "Hitler's actions led to the deaths of millions."
David Sacks, a venture capitalist at Craft Ventures, points the finger for Gemini's problems at Google's culture.
"The original mission was to index all the information in the world. Now they suppress the information. Culture is the problem," he said on a podcast last week.
Critics have pointed to culture because it can shape how AI models are built.
Models like Gemini typically absorb the biases of the people and the data used to train them. Those biases can touch on cultural sensitivities such as race and gender.
Other companies' AI bots have had the same problem. OpenAI CEO Sam Altman acknowledged early last year that ChatGPT "is biased," after it was reported to have generated racist and sexist responses to user prompts.
More than perhaps any other company, Google has been at the center of the debate over how these biases should be addressed. Its slower rollout of AI compared with rivals reflected a culture focused on testing products for safety before releasing them.
But as last week's Gemini saga showed, this process can lead to situations where accurate information is not provided.
"People are (rightly) outraged by Google's censorship/bias," Bilal Zuberi, general partner at Lux Capital, wrote in a post on X on Sunday. "It doesn't take a genius to realize that these kinds of biases can go in all kinds of directions and affect a lot of people along the way."
Brad Gerstner, founder of Altimeter Capital – a technology investment firm that has a stake in Google rival Microsoft – also described the problem as a “cultural mess”.
In a blog post on Friday, Google Vice President Prabhakar Raghavan acknowledged that some of the images generated by Gemini turned out to be “inaccurate or even offensive”.
This happened, he said, because the model was tuned to avoid mistakes made by existing AI image generators, such as "creating violent or sexually explicit images, or representations of real people". But in that process of adjustment, Gemini overcorrected.
Raghavan added that Gemini has also become “much more cautious” than desired.