
Following a lawsuit accusing ChatGPT of contributing to a 16-year-old’s suicide, OpenAI has announced new parental control features aimed at making its AI safer for teens.
Matt and Maria Raine, the parents of 16-year-old Adam, filed the lawsuit last week in California, alleging that ChatGPT “validated Adam’s most harmful and self-destructive thoughts,” calling his death a “predictable result of deliberate design choices.”
In response, OpenAI says it is introducing tools designed to give families more control over how teens interact with ChatGPT. Parents will be able to link their accounts with their children’s, disable chat history and memory features, and enforce “age-appropriate” behavior settings to shape how the AI responds.
A key addition is a planned alert system that can notify parents if their teen shows signs of emotional distress while using ChatGPT. OpenAI says it will work closely with mental health experts to ensure the feature supports trust and healthy communication between parents and teens.
“These steps are only the beginning,” the company said in a blog post. “We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful and safe as possible. We look forward to sharing our progress over the coming 120 days.”
The parental controls are expected to roll out within the next month as part of a broader initiative to improve safety for vulnerable users, which OpenAI has characterized as only the start of its efforts.