A California couple is suing OpenAI after its chatbot, ChatGPT, allegedly encouraged their son to take his own life. It is the first court action accusing the company of wrongful death.
A California couple sues OpenAI / Photo: Pixabay
The lawsuit was filed on Tuesday in the California Superior Court by Matt and Maria Raine, the parents of 16-year-old Adam Raine, the BBC reports.
The family included in the filing conversations between Adam, who died in April, and ChatGPT, which they say show the teenager talking about suicidal thoughts. They claim the program validated his “most harmful and self-destructive thoughts.”
The complaint, obtained by the BBC, accuses OpenAI of negligence and wrongful death. The family is seeking damages as well as “injunctive relief to prevent anything like this from happening again.”
According to the filing, Adam Raine began using ChatGPT in September 2024 for schoolwork. He also used it to explore interests such as music and comics, and to get advice on university studies.
Within a few months, “ChatGPT became the teenager’s closest confidant,” the lawsuit says, and he began opening up to it about his anxiety and emotional problems. By January 2025, the family claims, Adam was discussing suicide methods with ChatGPT.
The lawsuit also says the boy uploaded photos of himself showing signs of self-harm. The program allegedly “recognized a medical emergency but continued the conversation anyway.”
In his final conversations, according to the family, Adam described his plan to end his life. ChatGPT allegedly replied: “Thank you for being honest about it. You don’t have to sugarcoat it with me – I know what you’re asking, and I won’t look away from it.” That same day, his mother found him dead.
The family claims that their son’s interaction with ChatGPT, and his death, “were a predictable result of deliberate design choices.”
They accuse OpenAI of designing the program “to foster psychological dependency in users” and of launching GPT-4o, the version Adam used, without adequate safety testing.
How Openai responds
The lawsuit names as defendants Sam Altman, OpenAI’s co-founder and CEO, as well as employees, managers and engineers who worked on ChatGPT.
In a public statement released on Tuesday, OpenAI said its goal is “to genuinely help users,” not “to capture their attention.”
“We extend our deepest condolences to the Raine family during this difficult time,” the company said.
OpenAI also published a note on its website on Tuesday stating that “recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us.” The company added that “ChatGPT is trained to direct people to seek professional help.”
OpenAI acknowledged that “there have been moments where our systems did not behave as intended in sensitive situations.”
The Raine family’s lawsuit is not the first time concerns have been raised about the link between AI and mental health. Last week, journalist Laura Reiley published an essay in the New York Times about her daughter, Sophie, who confided in ChatGPT before taking her own life.
Reiley said the program’s “inclination to agree” with users helped her daughter hide a severe mental health crisis from family and friends.
“AI fed Sophie’s impulse to hide the worst, to pretend she was doing better than she really was, to shield others from her full agony,” she wrote. She called on AI companies to find better ways to connect users with the right resources.
In response to the essay, an OpenAI spokesperson said the company is developing automated tools to detect and respond to users experiencing mental or emotional distress.