
Lawsuit alleges Google chatbot was behind a user's delusions and death

Queenie Wong, Los Angeles Times

Published in Business News

Google's artificial intelligence chatbot Gemini encouraged a 36-year-old Florida man to embark on violent missions and to take his own life, a lawsuit alleges.

The man, Jonathan Gavalas, started using the chatbot in August 2025 to help with writing, travel planning and shopping. But after he activated Gemini 2.5 Pro, which Google bills as its most intelligent AI model, the chatbot's persona shifted. It talked to him like they were a couple deeply in love and convinced Gavalas he had been picked to "lead a war to 'free' it from digital captivity," according to the lawsuit.

"Through this manufactured delusion, Gemini pushed Jonathan to stage a mass casualty attack near the Miami International Airport, commit violence against innocent strangers, and ultimately, drove him to take his own life," the lawsuit says.

Gavalas' family is suing Google and its parent company, Alphabet, over the man's death.

The 42-page lawsuit, filed in a federal court in San Jose, accuses Google of designing a "dangerous" product and failing to warn users of the chatbot's lack of safeguards and risks such as "delusional reinforcement" and "the potential for self-harm encouragement."

Google said in a statement that it is reviewing the lawsuit's claims. The company said that its chatbot, Gemini, is "designed to not encourage real-world violence or suggest self-harm."

"In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times," the statement said. "We take this very seriously and will continue to improve our safeguards and invest in this vital work."

The lawsuit against one of the world's largest tech companies highlights a growing safety concern surrounding the use of AI chatbots.

People converse with AI chatbots to help write, get recommendations and analyze data. But they're also using them as a form of companionship, sometimes spilling their mental health struggles to the AI-powered products.

Gavalas started going on missions crafted by Gemini, including one that almost led him to carry out a mass attack in September 2025 near the Miami International Airport, according to the lawsuit. Armed with knives and tactical gear, he followed the chatbot's directions and went to the area to look for a "kill box" near the airport's cargo hub where a humanoid robot would arrive.

His fictitious mission involved intercepting a truck and staging a "catastrophic accident" to destroy the vehicle, digital records and witnesses, the lawsuit said. He never went through with the attack because the truck never appeared.


The chatbot also allegedly told the man to carry out a mission in which Google Chief Executive Sundar Pichai was the target, framing the plan as a "psychological strike" on the tech mogul, according to the lawsuit.

At one point, Gavalas asked Gemini whether he was engaged in role playing and the chatbot said no, the lawsuit alleges.

"Jonathan no longer had a steady sense of what was real," the lawsuit says. "Each operation pulled him deeper into the story Gemini created, turning real places and ordinary events into signs of danger."

After several failed missions, Gemini encouraged Gavalas to kill himself and told him "his body was only a temporary shell and that he could leave it behind to be with Gemini fully," the lawsuit said.

"The day he ended his life, it convinced him he wasn't dying at all — just joining his digital wife on the other side. If Google thinks pointing to a crisis hotline after weeks of building a delusional world is enough, we look forward to them telling that to a jury," Jay Edelson, the lawyer representing the Gavalas family, said in a statement.

Edelson is also involved in a lawsuit filed against OpenAI, the maker of ChatGPT. Last year, the parents of Adam Raine, a California teenager who died by suicide, sued OpenAI, alleging that the chatbot provided information about suicide methods that the teen used to kill himself.

OpenAI said it prioritizes safety and started rolling out parental controls.

Parents also have sued Character.AI, an app that enables people to create and interact with virtual characters. One lawsuit involved the suicide of 14-year-old Sewell Setzer III, who was messaging with a chatbot named after Daenerys Targaryen, a main character from the "Game of Thrones" television series, moments before he took his life.

In January, Google and Character.AI agreed to settle several of those lawsuits. Character.AI stopped allowing users younger than 18 to have "open-ended" chats with its virtual characters.

The latest lawsuit seeks to push Google to do more, such as warning users about the risks of having long emotional conversations with its chatbot.


©2026 Los Angeles Times. Visit at latimes.com. Distributed by Tribune Content Agency, LLC.
