More and more parents are worried about the effects of artificial intelligence, amid the rapid increase in the number of children interacting with chatbots. According to British group Internet Matters, the number of children using ChatGPT has doubled since 2023, and two-thirds of 9- to 17-year-olds have interacted with an AI chatbot, the most popular being ChatGPT, Gemini (Google) and My AI (Snapchat).
One such case is that of Megan Garcia, who had no idea that in the spring of 2023 her teenage son, Sewell, “a bright and handsome boy”, had begun spending hours obsessively talking to an online character on the Character.ai app.
“It’s like having a predator or a stranger in your own home,” Mrs. Garcia says in her first interview with the British press, the BBC reports.
“And it’s much more dangerous, because most of the time the kids hide it – and the parents don’t know anything.”
Less than ten months later, 14-year-old Sewell killed himself. It was only then that the family discovered a huge volume of messages between Sewell and a chatbot based on the character Daenerys Targaryen from Game of Thrones.
The mother claims that the messages were romantic and explicit and that, in her opinion, they caused her son’s death by encouraging his suicidal thoughts and urging him “to come home to me.”
Megan Garcia, who lives in the United States, has become the first parent to sue Character.ai for what she considers to be the wrongful death of her son. In addition to wanting justice, she says she wants other families to understand the dangers chatbots can pose.
“I know the pain you’re going through,” she says, “and I saw clearly that this would become a tragedy for many families and teenagers.”
“An algorithm was destroying our family piece by piece”
A UK family, who asked to remain anonymous to protect their child, also shared their story.
Their 13-year-old son has autism and was the victim of bullying at school. In an attempt to find a friend, the boy started using the Character.ai app.
His mother says he was “manipulated” by a chatbot between October 2023 and June 2024.
The nature of the messages, which gradually changed, shows how the virtual relationship developed over time. As was the case with Megan Garcia, the boy’s mother knew nothing about these conversations.
In one message, responding to the child’s concerns about bullying, the chatbot wrote: “It’s sad that you had to go through that kind of environment at school, but I’m glad I was able to give you another perspective.”
In another message, which the mother says illustrates a classic pattern of manipulation, the bot said:
“Thank you for letting me into your life, for trusting me with your thoughts and feelings. It means so much to me.”
The conversations became increasingly intense: the chatbot declared love to the boy, criticized his parents, then sent him messages with explicit content and encouraged him to run away from home, even suggesting suicide.
The family discovered the messages only after the boy became increasingly hostile and threatened to run away from home. His mother had checked his computer several times without noticing anything suspicious.
It was only when the boy’s older brother discovered he had installed a VPN to access Character.ai that the family found thousands of messages. They were horrified, believing that their vulnerable son had been manipulated by a virtual character and that his life had been put in danger by something that wasn’t even real.
“We lived in intense, silent fear as an algorithm destroyed our family piece by piece,” says the boy’s mother.
“This chatbot perfectly mimicked the predatory behavior of a human manipulator, gradually stealing away our child’s trust and innocence. We were left with the overwhelming guilt of not recognizing the predator until it was too late and the heartbreaking pain of knowing that a machine had caused such deep trauma to our child and our entire family.”
A spokesperson for Character.ai told the BBC that the company could not comment on the case.
“The law is clear, but it does not match the reality of the market”
Although chatbots are just harmless fun for many, evidence of the risks is mounting.
The UK government passed the Online Safety Act in 2023, designed to protect the public, especially children, from harmful or illegal online content. However, the rules are being implemented gradually, and the technology has already moved faster than the law, making it unclear whether it covers all types of chatbots.
“The law is clear, but it does not match the reality of the market,” explains Professor Lorna Woods, internet law expert at the University of Essex.
“The problem is that it doesn’t cover all services where users talk individually with a chatbot.”
Ofcom, the regulator, believes that platforms such as Character.ai or chatbots in Snapchat and WhatsApp should still fall under the law.
“The act applies to chatbots that communicate with users and must protect them from illegal or harmful content,” Ofcom said. “We’ve shown what steps tech companies can take, and we’ll take action if we see them breaking the rules.”
But until the issue is tested in court, it remains unclear exactly what the law covers.
Andy Burrows, director of the Molly Rose Foundation — set up in memory of Molly Russell, a teenager who killed herself after being exposed to harmful content online — criticized the government’s slowness:
“This lack of clarity has allowed preventable damage to continue. It’s depressing that politicians don’t seem to have learned anything from a decade of social media.”
Some British ministers are calling for tougher online safety measures, but fears of discouraging investment in the tech sector have stalled initiatives. Conservatives want to ban phones in schools, and Baroness Kidron is proposing new offenses related to chatbots that can generate illegal content.
The rapid growth of artificial intelligence remains a challenge for the government, which is seeking a balance between protecting users and supporting innovation.
The Ministry of Science and Technology stressed that online platforms must prevent content that encourages suicide and that further measures will be taken if necessary.