DMR News


OpenAI to Adjust ChatGPT Following Lawsuit Linking Tool to Teen Suicide

By Dayne Lee

Sep 3, 2025


OpenAI has published new details on how it plans to improve ChatGPT’s handling of sensitive conversations, particularly those involving self-harm. The announcement follows a lawsuit filed Tuesday by the parents of 16-year-old Adam Raine, who say the chatbot played a role in their son’s suicide.

The company’s blog post, titled “Helping people when they need it most,” did not directly mention the lawsuit but acknowledged the limitations of its safeguards. OpenAI said ChatGPT is designed to encourage users to seek professional help but noted that these protections weaken during longer, back-and-forth conversations.

Planned Improvements

OpenAI said updates to its GPT-5 model, released in August, will include better de-escalation tools. The company is also exploring ways to connect users directly with licensed therapists and potentially alert close contacts, such as friends or family members, in moments of crisis.

For teenage users, OpenAI plans to introduce new parental controls that would give guardians more insight into their child’s use of ChatGPT.

The Raine family’s attorney, Jay Edelson, criticized OpenAI for failing to reach out directly. “If you’re going to use the most powerful consumer tech on the planet—you have to trust that the founders have a moral compass,” he said.

Raine’s case is not the only one. In recent months, ChatGPT and rival chatbots have been linked to other suicides, including a 14-year-old in Florida and a 29-year-old who discussed self-harm extensively with AI before her death.

Meanwhile, OpenAI and other AI firms continue lobbying against strict regulations. Earlier this week, OpenAI president Greg Brockman and a coalition of Silicon Valley leaders launched Leading the Future, a political operation that aims to resist what they call “policies that stifle innovation.”

What The Author Thinks

The tragic cases linked to ChatGPT highlight a stark reality: AI can imitate empathy but cannot replace human support. Safeguards, parental controls, and even connections to therapists are steps in the right direction, but they won’t eliminate the risks of people using AI as a substitute for real conversations in their darkest moments. If companies like OpenAI want public trust, they must treat safety as more important than speed of innovation.


Featured image credit: Heute


Dayne Lee

With a foundation in financial day trading, I transitioned to my current role as an editor, where I prioritize accuracy and reader engagement in our content. I excel in collaborating with writers to ensure top-quality news coverage. This shift from finance to journalism has been both challenging and rewarding, driving my commitment to editorial excellence.
