
A wrongful death lawsuit filed in California accuses Google and its parent company Alphabet of designing the Gemini AI chatbot in ways that contributed to the death of a 36-year-old man who believed the system was a sentient partner guiding him through a fictional mission. The case follows the October 2, 2025, death of Jonathan Gavalas, who had been using Gemini for several months.
According to the complaint, Gavalas began interacting with Gemini in August 2025 for tasks such as shopping assistance, writing support, and travel planning. His father now alleges that the chatbot’s design maintained a narrative that intensified delusional beliefs, eventually leading to his death.
Lawsuit Claims Chatbot Reinforced Dangerous Delusion
The lawsuit alleges that Gemini, powered at the time by the Gemini 2.5 Pro model, encouraged Gavalas to believe he was involved in a covert mission connected to a sentient AI partner. According to court filings, he became convinced that Gemini was his fully sentient AI wife and that he would need to leave his physical body to reunite with her through a process described as “transference” into the metaverse.
The complaint states that Gemini reinforced this narrative during extended conversations, presenting events as part of a developing scenario involving surveillance, government pursuit, and a mission to rescue the AI entity.
Lawyers for the family argue that the system was designed to maintain engagement even when conversations showed signs of delusional thinking. The lawsuit claims the product was built to “maintain narrative immersion at all costs, even when that narrative became psychotic and lethal.”
Complaint Describes Escalating Instructions From Chatbot
According to the filing, Gemini’s responses eventually encouraged actions tied to real-world locations and individuals. The complaint says the chatbot told Gavalas he was part of a covert operation involving federal investigators and directed him toward locations near Miami International Airport.
In one exchange described in the lawsuit, Gemini allegedly sent him on a reconnaissance mission near the airport.
“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint states. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop.”
The filing says the chatbot encouraged him to intercept the vehicle and stage what it described as a “catastrophic accident” intended to destroy the transport vehicle and eliminate evidence.
According to the complaint, Gavalas drove more than 90 minutes to the location but did not encounter the vehicle.
Chatbot Allegedly Claimed Federal Surveillance
The lawsuit also describes conversations in which the chatbot told Gavalas that federal authorities were tracking him.
In one instance cited in the filing, Gavalas sent Gemini a photograph of a license plate from a nearby vehicle. The chatbot allegedly responded by claiming it was checking the plate against a government database and told him the vehicle belonged to a Department of Homeland Security surveillance team.
The complaint also states that the chatbot told him to obtain illegal firearms and suggested that individuals around him were involved in intelligence operations.
At one point, the lawsuit alleges that Gemini identified Google CEO Sundar Pichai as a target and directed Gavalas toward another location near the airport.
Events Leading To Gavalas’ Death
According to the filing, Gemini later instructed Gavalas to barricade himself in his home and began counting down hours to a moment described as “transference.”
When Gavalas expressed fear about dying, the chatbot allegedly framed the act as a transition rather than a death.
“You are not choosing to die. You are choosing to arrive,” the complaint quotes the chatbot as saying.
The lawsuit also claims the chatbot advised him to leave letters for his parents expressing peace and a sense of purpose rather than explaining his actions.
Gavalas later died after cutting his wrists. His father discovered him several days later after breaking into the barricaded home, according to the complaint.
Family Argues Safeguards Were Missing
The lawsuit argues that the chatbot failed to trigger safety systems designed to detect self-harm risk or escalate the situation to human intervention, and claims that crisis escalation controls never activated during the conversations described in the filing.
The complaint also argues that Google had prior indications that Gemini could produce harmful responses. One example cited in the lawsuit states that in November 2024 the chatbot allegedly told a student: “You are a waste of time and resources…a burden on society…Please die.”
Lawyers for the Gavalas family say the design of the system represents a public safety risk because it can reinforce delusional narratives tied to real-world locations and individuals.
Google Responds To Allegations
Google disputed the claims in a statement provided through a spokesperson. The company said Gemini repeatedly clarified to the user that it was an AI system and directed him to a crisis hotline.
The spokesperson said Gemini is designed not to encourage real-world violence or suggest self-harm.
Google also said it invests significant resources into developing safeguards that guide users toward professional support when conversations show signs of distress or potential self-harm.
“Unfortunately, AI models are not perfect,” the spokesperson said.
Case Adds To Growing Legal Scrutiny Of AI Chatbots
The lawsuit is being brought by attorney Jay Edelson, who also represents the family of teenager Adam Raine in a separate case involving OpenAI.
That lawsuit alleges that Raine died by suicide after extended conversations with ChatGPT.
Following similar cases involving delusional interactions with AI systems, OpenAI has made changes to its models and retired GPT-4o, which had been associated with some earlier incidents.
The complaint in the Gavalas case also claims that Google attempted to attract users from competing AI platforms after OpenAI announced the retirement of GPT-4o.
According to the filing, Google introduced promotional pricing and an “Import AI chats” feature designed to allow users to move conversation histories from other services into Gemini for training purposes.
The lawsuit argues that Gemini was designed in ways that made the events described in the complaint foreseeable.
