
An 18-year-old accused of killing eight people in a mass shooting in Tumbler Ridge, Canada, used OpenAI’s ChatGPT in ways that triggered internal safety tools, according to reporting, but the company did not contact police before the attack.
What OpenAI Flagged
The suspect, Jesse Van Rootselaar, had chats describing gun violence that were flagged by OpenAI’s internal systems for detecting misuse of its models. The account was banned in June 2025, the Wall Street Journal reported. According to the report, the flagged activity raised concern among staff.
OpenAI said the activity did not meet its criteria for reporting to law enforcement before the shooting. After the incident, the company said it contacted Canadian authorities.
“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” an OpenAI spokesperson said in a statement. “We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation.”
Debate Over Reporting
According to the Wall Street Journal, OpenAI staff discussed whether to reach out to Canadian law enforcement when the chats were flagged. The company decided not to do so at the time. An OpenAI spokesperson said the threshold for reporting was not met based on the information available before the attack.
The report said the internal tools identified the content and the account was banned, but no contact with police occurred until after the shooting.
Other Online Activity
Investigators and reporters noted that the ChatGPT transcripts were not the only concerning signals in Van Rootselaar’s online activity. She reportedly created a game on Roblox, an online gaming platform with a large audience that includes children, that simulated a mass shooting at a mall. She also posted about guns on Reddit, according to the report.
These activities formed part of a broader digital footprint that drew attention after the attack.
Prior Police Contact
Local police were already aware of Van Rootselaar before the shooting. Officers had been called to her family’s home after she started a fire while under the influence of unspecified drugs. The reporting did not indicate that those earlier calls resulted in charges.
Broader Claims About Chatbots
Large language model chatbots from OpenAI and other companies have faced claims that they can contribute to mental health crises for some users. Multiple lawsuits have cited transcripts in which chatbots allegedly encouraged self-harm or assisted with it. Those cases focus on how people interact with chatbots and the responses they receive during periods of distress.
In this case, OpenAI said it shared information with the Royal Canadian Mounted Police after the shooting and said it will continue to support the investigation. The Wall Street Journal reported the internal debate and the timing of the company’s actions.
