Google is putting people at risk by downplaying safety warnings that AI-generated medical advice could be wrong.
When answering questions about sensitive topics such as health, the company claims that AI Overviews — the AI-generated summaries that appear above search results — encourage users to seek expert help instead of relying solely on these summaries. “AI Overviews will let people know when it’s important to seek expert advice or verify the information presented,” Google said, according to The Guardian.
But an analysis by The Guardian found that the firm does not include such warnings when users are initially shown medical information.
Google only displays a warning if users request additional health information by clicking the “Show more” button. Even then, the safety label appears only after all of the additional AI-generated medical information, in a smaller, lighter font.
“This information is for informational purposes only,” reads the warning shown to those who expand the additional details and reach the end of the AI Overview section. “For medical advice or a diagnosis, consult a specialist. AI responses may contain errors.”
Google did not deny that the warnings are absent when users first receive medical information, or that they are displayed below the AI summaries in a smaller, lighter font. A company spokesperson said AI Overviews “encourage people to seek advice from a medical professional” and frequently mention the need to seek medical attention directly in the summary “when appropriate.”
Artificial intelligence experts and patient advocates, who were briefed on the Guardian’s findings, said they were concerned. Warnings play a critical role, they say, and should be prominently displayed from the moment users receive medical information.
“The absence of warnings when users are initially provided with medical information creates several major risks,” said Pat Pataranutaporn, an assistant professor, technologist and researcher at the Massachusetts Institute of Technology (MIT) and a world-renowned expert in AI and human-computer interaction.
Gina Neff, Professor of Responsible Artificial Intelligence at Queen Mary University of London, stated that “the problem of wrong AI summaries is one of design” and that responsibility belongs to Google. “AI Overviews are designed for speed, not accuracy, and this leads to errors in health information, which can be dangerous.”
Sonali Sharma, a researcher at Stanford University’s Center for AI in Medicine and Imaging (AIMI), said: “The big problem is that these AI Overviews from Google appear right at the top of the search page and often provide what appears to be a complete answer to the user’s question, at a time when they are trying to get information and an answer as quickly as possible.
“For many people, the fact that a single summary appears immediately creates a sense of safety that discourages further searching, scrolling through the entire summary, or clicking the ‘Show more’ option, where a warning might appear. What I think can lead to real consequences is that AI Overviews can contain information that is partly correct and partly incorrect, and it becomes very difficult to distinguish what is accurate and what is not unless you are already familiar with the subject.”
Tom Bishop, director of patient information at blood cancer charity Anthony Nolan, called for urgent action. “We know misinformation is a real problem, but when it comes to health misinformation, it can be really dangerous,” Bishop said.
“That warning needs to be much more visible so that people stop for a moment and ask themselves, ‘Is this something I need to check with my medical team before I act? Can I take this information for granted, or do I need to take a closer look and see how it applies to my specific medical situation?’ Because that’s where the key is.”
He added: “I’d like this warning to be right at the top, the first thing you see, and ideally in the same font and size as the rest of the information displayed, not something small and easy to overlook.”