Every week, over a million people express suicidal intent when chatting with ChatGPT, OpenAI estimates.

Over a million ChatGPT users send messages every week that include “explicit indicators of possible suicidal plans or intentions,” according to a post published Monday on the OpenAI blog.

According to OpenAI, GPT-5 has expanded access to emergency lines. Photo: Shutterstock

The figures, which are part of an update on how the chatbot handles sensitive conversations, are among the AI giant’s most direct statements yet about the extent to which AI can exacerbate mental health issues, The Guardian notes.

“Mental health emergencies related to psychosis or mania”

In addition to the estimates of suicidal ideation and related interactions, OpenAI also stated that approximately 0.07% of active users in a given week—about 560,000 of the 800 million weekly users—exhibit “possible signs of mental health emergencies related to psychosis or mania”.
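As a quick sanity check, the two figures are consistent with each other. A minimal sketch of the arithmetic, taking OpenAI’s 800 million weekly-user count as given:

```python
# Verify OpenAI's stated figures: 0.07% of ~800 million weekly active users.
weekly_users = 800_000_000
share = 0.0007  # 0.07% expressed as a fraction

print(f"{weekly_users * share:,.0f}")  # prints 560,000
```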

The post cautioned that these conversations were difficult to detect or measure, and that this was an initial analysis.

A teenager committed suicide after an intense interaction with ChatGPT

Artificial intelligence giant OpenAI is facing heightened scrutiny following a highly publicized lawsuit filed by the family of a teenager who committed suicide after an intense interaction with ChatGPT.

The Federal Trade Commission launched a broad investigation in September into companies that build AI chatbots, including OpenAI, to examine how they measure negative impacts on children and teenagers.

More damaging responses than before

Earlier tests had concluded that the “updated” ChatGPT offered more harmful responses than before.

But OpenAI said the recent GPT-5 update reduced its product’s unwanted behaviors and improved user safety in a model evaluation involving more than 1,000 conversations about self-harm and suicide.

“Our new automated evaluations give the new GPT-5 a score of 91% compliance with desired behaviors, compared to 77% for the previous GPT-5,” the company’s post said.

According to OpenAI, GPT-5 expanded access to emergency phone lines and added reminders for users to take breaks during long sessions.

To improve the model, the company said it recruited 170 doctors from its global network of health experts to assist with its research in recent months, including assessing the safety of the model’s responses and helping draft the chatbot’s answers to mental health questions.

“As part of this work, psychiatrists and psychologists analyzed more than 1,800 model responses related to serious mental health situations and compared the responses of the new GPT-5 chat model to those of previous models,” stated OpenAI.

Concern

For some time, AI researchers and public health advocates have been concerned about the tendency of chatbots to confirm users’ decisions or delusions, regardless of whether they may be harmful, a problem known as “sycophancy.”

Experts also worry about people using AI chatbots for psychological support and warn that this could harm vulnerable users.

The language used in OpenAI’s post distances the company from any potential causal link between its product and the mental health crises experienced by its users, The Guardian notes.