
OpenAI has announced plans to roll out new parental control features for ChatGPT, amid intensifying debate over the impact of artificial intelligence on young people’s mental health.
In a blog post released Tuesday, the California-based company said the update aims to give families tools to set “healthy guidelines that fit a teen’s unique stage of development.”
The forthcoming controls will allow parents to:
Link their accounts with their teen’s account.
Restrict or disable features such as memory and chat history.
Set age-appropriate model behaviour rules that shape how ChatGPT responds to questions.
Additionally, OpenAI said parents will receive notifications if the system detects signs of distress in a teen’s usage. The company stressed that this feature will be designed with expert input to foster trust and constructive dialogue between parents and their children.
“These steps are only the beginning,” the company wrote. “We will continue to learn, refine, and strengthen our approach—guided by experts—with the goal of making ChatGPT as safe and helpful as possible. We look forward to sharing our progress over the next 120 days.”
The announcement follows heightened scrutiny of AI’s role in youth well-being. Just last week, a California couple, Matt and Maria Raine, filed a lawsuit alleging that ChatGPT contributed to the suicide of their 16-year-old son, Adam.
The lawsuit claims the chatbot validated Adam’s “most harmful and self-destructive thoughts,” describing his death as the “predictable result of deliberate design choices.”
While OpenAI did not reference the lawsuit directly in its blog post, the company has previously expressed condolences to the Raine family.
The newly announced parental controls form part of a broader set of measures unveiled last week to strengthen safeguards for vulnerable users.