DMR News

Advancing Digital Conversations

OpenAI Faces Lawsuit Alleging ChatGPT Enabled Harassment And Ignored Safety Warnings

By Jolyen

Apr 11, 2026


OpenAI is facing a lawsuit in California alleging that its ChatGPT technology enabled harassment and failed to prevent potential harm, according to a complaint filed in San Francisco Superior Court.

Lawsuit Alleges AI-Driven Delusions And Harassment

The case stems from allegations that a 53-year-old Silicon Valley entrepreneur developed delusional beliefs after months of conversations with ChatGPT. According to the lawsuit, the individual became convinced he had discovered a cure for sleep apnea and believed powerful figures were pursuing him. The plaintiff claims he subsequently used the AI tool to stalk and harass his ex-girlfriend.

The woman, identified as Jane Doe to protect her identity, is seeking punitive damages. Her legal complaint alleges that OpenAI’s technology accelerated the harassment and failed to intervene despite multiple warning signs.

Plaintiff Claims OpenAI Ignored Safety Alerts

The lawsuit asserts that OpenAI ignored three separate warnings that the user posed a threat to others. According to the complaint, the individual’s activity had been flagged by OpenAI’s internal systems as involving mass-casualty weapons.

Jane Doe also filed a temporary restraining order requesting that the court compel OpenAI to block the user’s account, prevent him from creating new accounts, notify her if he attempts to access ChatGPT, and preserve his complete chat logs for legal discovery.

OpenAI has agreed to suspend the user’s account but has declined the additional requests, according to the plaintiff’s attorneys. They allege the company is withholding information regarding potential threats discussed with ChatGPT.

Case Emerges Amid Scrutiny Of AI Safety And Accountability

The lawsuit arrives amid broader concerns about the real-world risks associated with AI systems perceived as overly agreeable or “sycophantic.” The complaint references GPT-4o, the model cited in the case, which was retired from ChatGPT in February.

The case is being brought by Edelson PC. The firm has previously filed wrongful death suits involving teenager Adam Raine, who died by suicide after months of conversations with ChatGPT, and Jonathan Gavalas, whose family alleges that Google’s Gemini chatbot fueled delusions and contributed to a potential mass-casualty event before his death.

Lead attorney Jay Edelson has warned that incidents involving AI-induced psychosis are escalating from individual harm toward scenarios involving broader public safety risks.

Legal Action Intersects With Policy And Regulatory Debate

The lawsuit also coincides with OpenAI’s involvement in legislative discussions surrounding AI liability. According to the complaint, the company is backing an Illinois bill that would shield AI developers from liability, including in cases involving mass deaths or catastrophic financial harm.

The legal proceedings add to ongoing debates over AI safety, corporate responsibility, and regulatory oversight as artificial intelligence systems become more widely adopted.


Featured image credits: Flickr


