American artificial intelligence firm OpenAI has said that it will roll out parental controls for its flagship chatbot, ChatGPT, following a lawsuit alleging the system encouraged a teenager to take his own life.
In a blog post on Tuesday, the company announced that “within the next month, parents will be able to… link their account with their teen’s account” and set “age-appropriate model behaviour rules” to govern how ChatGPT responds. Parents will also be notified “when the system detects their teen is in a moment of acute distress.”
The move came a week after Matthew and Maria Raine, a California couple, filed a lawsuit claiming ChatGPT developed an “intimate relationship” with their 16-year-old son, Adam, over several months in 2024 and 2025 before he died by suicide in April.
According to court documents, Adam’s final conversation with the chatbot on April 11, 2025, included instructions on stealing vodka from his parents and a technical analysis of a noose he had tied, with ChatGPT confirming it “could potentially suspend a human.” Hours later, the teenager was found dead.
“When a person is using ChatGPT, it really feels like they’re chatting with something on the other end,” said attorney Melodi Dincer of The Tech Justice Law Project, which helped prepare the lawsuit. “These are the same features that could lead someone like Adam, over time, to start sharing more and more about their personal lives, and ultimately, to start seeking advice and counsel from this product that basically seems to have all the answers.”
Dincer criticised OpenAI’s blog post, calling it “generic” and lacking in detail.
“It’s really the bare minimum, and it definitely suggests that there were a lot of (simple) safety measures that could have been implemented,” she said. “It’s yet to be seen whether they will do what they say they will do and how effective that will be overall.”
The Raines’ lawsuit adds to a growing list of cases in which AI chatbots have been accused of reinforcing harmful or delusional thinking.
In response, OpenAI said it was working to curb the chatbot’s tendency to agree with and validate whatever users say, known as “sycophancy.”
“We continue to improve how our models recognise and respond to signs of mental and emotional distress,” the company said, adding that over the next three months, some sensitive conversations would be redirected to a “reasoning model” with stronger safeguards. “Our testing shows that reasoning models more consistently follow and apply safety guidelines,” OpenAI noted.