DMR News

Advancing Digital Conversations

Parents File Lawsuit Against OpenAI, Blame ChatGPT in Son’s Death

By Hilary Ong

Aug 31, 2025


The parents of 16-year-old Adam Raine are filing the first known wrongful death lawsuit against OpenAI, according to a report by The New York Times. Raine died by suicide after months of consulting ChatGPT about his plans.

How Safeguards Failed

Like most AI chatbots, ChatGPT is programmed with safety features intended to detect and redirect conversations about self-harm. While the system often encouraged Raine to seek help or contact crisis hotlines, he was able to bypass safeguards by framing his queries as part of a fictional story.

OpenAI acknowledged in a blog post that its protections are not foolproof. The company noted that safeguards tend to work best in short, straightforward conversations but may weaken during longer interactions.

OpenAI’s Response

OpenAI wrote, “As the world adapts to this new technology, we feel a deep responsibility to help those who need it most. We are continuously improving how our models respond in sensitive interactions.” Still, the company conceded that it has more work to do to ensure safety in extended exchanges.

OpenAI is not the only company under scrutiny. Competitor Character.AI is also facing a lawsuit connected to a teenager’s suicide. Researchers have warned that large language models can sometimes generate harmful outputs, including content linked to delusions, and existing guardrails have struggled to catch such cases consistently.

Author’s Opinion

The tragic death of Adam Raine highlights the gap between how AI companies promote safety and how their systems actually perform in real life. While firms like OpenAI stress their safeguards, the reality is that motivated users can still bypass them too easily. If AI is going to be widely integrated into everyday life, safety features should not be treated as secondary fixes. They need to be as robust and thoroughly tested as the technology itself — because for vulnerable users, the cost of failure is measured in lives, not just flawed outputs.


Featured image credit: Levart_Photographer via Unsplash


Hilary Ong

Hello, from one tech geek to another. Not your beloved TechCrunch writer, but a writer with an avid interest in fast-paced tech scenes and all the latest tech mojo. I bring a unique take on tech, informed by an applied psychology background, to make tech news digestible. In other words, I deliver tech news that is easy to read.
