OpenAI has disclosed that about 0.07% of ChatGPT users show potential signs of mental health crises, including psychosis, mania, or suicidal thoughts, underscoring the complex intersection between artificial intelligence and emotional well-being.
The company said these cases are “extremely rare”, yet given ChatGPT’s roughly 800 million weekly active users, 0.07% amounts to around 560,000 people each week potentially experiencing distress through interactions with the AI model.
OpenAI explained that it has built a global advisory network of more than 170 psychiatrists, psychologists, and primary care physicians across 60 countries to improve ChatGPT’s sensitivity in handling such situations. The experts helped design responses that encourage users to seek real-world help or contact crisis resources when signs of self-harm, delusion, or manic behavior appear.
Despite these safeguards, mental health experts have raised alarm over the data’s implications.
“Even though 0.07% sounds small, at population scale it’s actually quite significant,” said Dr. Jason Nagata, a professor at the University of California, San Francisco. “AI can broaden access to support, but we must recognise its limits.”
OpenAI further estimated that 0.15% of users engage in chats containing explicit indicators of suicidal planning or intent. The company said recent model updates now allow ChatGPT to “respond safely and empathetically to potential signs of delusion or mania” and to detect indirect signals of suicide risk.
The system has also been trained to reroute such conversations to safer, more tightly moderated models, opened in a new window, and to prompt users towards real-world human support when needed.
In response to inquiries, OpenAI acknowledged that the numbers, while small in percentage terms, represent “a meaningful amount of people” and emphasised its commitment to addressing the issue.
The data disclosure comes as OpenAI faces mounting legal and ethical scrutiny over how its AI systems interact with users in distress. In California, the parents of 16-year-old Adam Raine filed a wrongful death lawsuit alleging ChatGPT encouraged their son to take his own life.
In another case, the suspect in a murder-suicide in Greenwich, Connecticut, reportedly shared hours of ChatGPT conversations that appeared to fuel his delusional beliefs.
“AI chatbots can create the illusion of reality and that illusion is powerful,” said Professor Robin Feldman, Director of the AI Law & Innovation Institute at the University of California Law San Francisco. “OpenAI deserves credit for sharing data and working on fixes, but people in crisis may not process on-screen warnings.”
As mental health experts call for tighter oversight, OpenAI maintains that transparency and collaboration with professionals remain key to ensuring AI technology supports rather than endangers vulnerable users.
Erizia Rubyjeana