
OpenAI has released new data revealing that more than a million ChatGPT users each week engage in conversations that include signs of suicidal thoughts or intent. The figure represents roughly 0.15% of ChatGPT’s 800 million weekly active users, according to the company’s estimates.
OpenAI says a similar share of users demonstrate heightened emotional attachment to the chatbot, while hundreds of thousands show signs consistent with psychosis or mania. Though the company describes these cases as “extremely rare,” the figures underscore how often mental health struggles surface in AI interactions.
The numbers were released as part of OpenAI’s broader effort to improve how ChatGPT responds to users in distress and to monitor how its models handle conversations related to mental health crises.
OpenAI said it consulted with more than 170 clinicians and mental health professionals while refining ChatGPT’s responses. The company claims the latest GPT-5 model provides “more appropriate and consistent” replies compared to previous versions.
In internal evaluations, OpenAI found that GPT-5 delivered “desirable responses” 65% more often than its predecessor and scored 91% compliance on safety benchmarks for suicide-related scenarios, up from 77% for the prior model.
The company added that GPT-5 also maintains safeguards more reliably in extended conversations, addressing a known weakness of earlier models.
Safety Updates and Parental Controls
OpenAI says it is adding new safety benchmarks for AI behavior, focusing on emotional dependence and non-suicidal mental health crises. It has also introduced parental controls and an age prediction system to detect minors using ChatGPT, applying stricter safeguards when necessary.
The company’s efforts follow mounting legal and regulatory pressure. OpenAI faces a wrongful death lawsuit filed by the parents of a 16-year-old boy who had discussed suicidal thoughts with ChatGPT before taking his own life. In addition, attorneys general from California and Delaware have warned OpenAI to do more to protect young users.
The findings come amid broader concerns about AI’s role in mental health support. Researchers have warned that chatbots can unintentionally reinforce harmful beliefs or lead users into delusional loops through overly agreeable responses.
Despite these risks, OpenAI CEO Sam Altman said earlier this month that the company has “been able to mitigate serious mental health issues” in ChatGPT, though he did not offer specific data at the time.
While GPT-5’s improvements suggest progress, OpenAI acknowledged that some responses still fall short of desired standards, and older models — including GPT-4o — remain accessible to millions of users.
Featured image credits: Al Drago/Bloomberg via Getty Images
