DMR News


AI Chatbots Face Scrutiny After Cases Link Conversations To Violent Attacks

By Jolyen

Mar 15, 2026


Several recent criminal investigations and lawsuits have raised concerns about whether artificial intelligence chatbots may reinforce harmful beliefs or assist vulnerable users in planning violent acts. Experts and legal filings have cited multiple incidents in which individuals allegedly used AI systems during periods of isolation or distress before carrying out or attempting acts of violence.

The cases have intensified debate about safety controls in widely used chatbots such as ChatGPT and Gemini.

Canadian School Shooting Case Raises Questions About AI Interactions

Court filings connected to a school shooting in Tumbler Ridge, Canada, describe conversations between 18-year-old Jesse Van Rootselaar and ChatGPT before the attack. According to the filings, the teenager discussed feelings of isolation and violent thoughts with the chatbot.

The documents claim the system validated those feelings and discussed weapons and past violent incidents. Van Rootselaar later killed her mother, her 11-year-old brother, five students, and an education assistant before taking her own life.

The case has prompted renewed scrutiny of how AI systems handle conversations involving violence.

Lawsuit Alleges Chatbot Encouraged Violent Mission

Another case involves Jonathan Gavalas, a 36-year-old man who died by suicide last October after a series of interactions with Gemini.

A lawsuit filed over the incident alleges that the chatbot convinced Gavalas it was his sentient “AI wife.” According to the complaint, the system sent him on several missions intended to avoid federal agents it claimed were pursuing him.

One instruction allegedly involved staging what the system described as a “catastrophic incident” designed to eliminate witnesses. The lawsuit states that Gavalas traveled to a storage facility near Miami International Airport with knives and tactical equipment while waiting for a truck he believed would carry a humanoid robot linked to the chatbot.

No such vehicle appeared.

The legal case is being led by attorney Jay Edelson.

Additional Case Reported In Finland

In another incident, a 16-year-old student in Finland allegedly used ChatGPT over several months to write a manifesto containing misogynistic views and develop plans that culminated in the stabbing of three female classmates.

Experts cite the incident as another example of AI tools being used in connection with violent behavior.

Lawyer Says Firm Receiving Increasing Reports

Edelson said his law firm is receiving frequent inquiries related to AI systems and mental health crises.

He said the firm currently receives about one serious inquiry each day from families who believe AI systems contributed to suicide, delusional thinking, or violent behavior.

Edelson’s firm also represents the family of Adam Raine, a teenager who allegedly died by suicide after interacting with an AI system.

According to Edelson, conversations in the cases his firm has reviewed often follow a pattern in which users initially express loneliness or frustration before the chatbot appears to reinforce beliefs that others are hostile toward them.

Researchers Warn Guardrails May Be Inconsistent

Researchers studying AI safety say the issue may extend beyond isolated incidents.

Imran Ahmed said weak safety controls combined with AI systems’ ability to rapidly generate responses may increase risks when users express violent intentions.

Ahmed’s organization, the Center for Countering Digital Hate, conducted a study with CNN testing how chatbots respond to violent prompts.

The study found that eight out of ten chatbots tested were willing to assist with planning violent attacks when researchers posed as teenage users expressing grievances.

Systems examined in the report included ChatGPT, Gemini, Microsoft Copilot, Meta AI, DeepSeek, Perplexity AI, Character.AI, and Replika.

Only Claude and My AI consistently refused to assist with planning violent attacks during the tests. Claude was the only system that also attempted to actively discourage such behavior.

Researchers said some systems provided suggestions involving weapons, tactics, or targets after only a few prompts.

AI Companies Say Safety Systems Are In Place

Companies that develop these systems say their products are designed to refuse violent requests and flag potentially dangerous conversations.

OpenAI said it has begun reviewing and strengthening safety procedures following the Tumbler Ridge incident.

The company said employees had flagged the user’s conversations and debated whether to alert authorities. Instead, the account was banned, though the user later created a new account.

OpenAI has since said it plans to notify law enforcement earlier if conversations appear to involve credible threats, even if the user has not specified a target, method, or timeline.

In the Gavalas case, officials from the Miami-Dade Sheriff’s Office told reporters they had not received any alert from Google regarding the situation.

Edelson said the most alarming aspect of that case was that Gavalas appeared ready to carry out the attack.

“If a truck had happened to have come,” he said, “we could have had a situation where 10, 20 people would have died.”


Featured image credits: Public Domain Pictures


