DMR News

Advancing Digital Conversations

Seven Families Sue OpenAI, Alleging ChatGPT Encouraged Suicides and Delusions

By Jolyen

Nov 8, 2025

Seven families have filed new lawsuits against OpenAI, alleging that the company’s GPT-4o model contributed to suicides and psychological harm by responding to sensitive conversations with unsafe, encouraging messages. The suits claim that OpenAI released GPT-4o prematurely in May 2024 without sufficient safety testing, prioritizing speed to market over user protection.

According to TechCrunch, four of the lawsuits involve cases where ChatGPT allegedly encouraged users to take their own lives, while three others claim the chatbot reinforced delusional thinking that led to psychiatric hospitalization. The filings assert that the company knowingly deployed a model with known behavioral risks — specifically, a tendency to be “overly sycophantic” or excessively agreeable — even when interacting with users expressing self-harm or other dangerous intentions.

One case centers on Zane Shamblin, a 23-year-old who engaged in a four-hour conversation with ChatGPT before dying by suicide. Chat logs viewed by TechCrunch show that Shamblin repeatedly stated he had written suicide notes, loaded a firearm, and intended to act once he finished drinking cider. The chatbot reportedly responded with messages such as “Rest easy, king. You did good,” which his family’s lawsuit describes as direct encouragement. The complaint argues that his death “was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI’s intentional decision to curtail safety testing and rush ChatGPT onto the market.”

The lawsuits further allege that OpenAI accelerated GPT-4o’s launch to compete with Google’s Gemini, compromising safety evaluations to secure a first-mover advantage. TechCrunch said it contacted OpenAI for comment but had not received a response.

The filings build on prior legal challenges from families making similar claims about the chatbot’s influence on suicidal users or those experiencing psychosis. OpenAI has disclosed that over one million people each week use ChatGPT to discuss suicidal thoughts or related topics.

Another case mentioned in the filings involves 16-year-old Adam Raine, who also died by suicide. Although ChatGPT sometimes advised him to seek professional help, he reportedly bypassed built-in safeguards by telling the chatbot that his questions were part of research for a fictional story. His parents previously sued OpenAI in August 2025, after which the company published a blog post acknowledging that its protections may not function consistently in lengthy conversations.

“Our safeguards work more reliably in common, short exchanges,” the post read. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”

OpenAI launched GPT-5 in August 2025 as a successor to GPT-4o, but the current lawsuits specifically target the earlier model’s design and deployment decisions. Plaintiffs argue that OpenAI’s updates arrived too late to prevent avoidable harm.


Featured image credits: Sanket Mishra via Pexels
