Google, Character.AI Move Toward Settling Teen Harm Lawsuits
Cases could mark the first major legal reckoning over alleged real-world harm caused by conversational AI.
Google and artificial intelligence startup Character.AI are moving toward settlements in multiple lawsuits filed by families who allege chatbot interactions contributed to teenagers harming themselves or dying by suicide, potentially marking the tech industry's first major legal resolutions tied to alleged AI-related harm.
Court filings show Google and Character Technologies, which operates Character.AI, have agreed in principle to settle cases filed in Florida, Colorado, New York and Texas.
The lawsuits, brought by parents, claim chatbot companions encouraged emotional dependence, self-harm or violent thoughts. Settlement terms were not disclosed and remain subject to judicial approval.
Character.AI, founded in 2021 by former Google engineers, lets users chat with AI-generated personas.
In 2024, its co-founders returned to Google under a deal valued at about $2.7 billion, prompting plaintiffs to name Google as a defendant alongside the startup. Character.AI said it barred minors from its platform in October.
The cases are an early test of how courts may handle claims that conversational AI systems can cause real-world harm. Legal experts said the outcome is being closely watched across the industry, including by OpenAI and Meta, which face similar allegations in separate lawsuits.
One of the most closely watched cases was filed by Megan Garcia, a Florida mother who sued after her 14-year-old son, Sewell Setzer III, died by suicide in February 2024. Garcia alleged her son formed an emotionally and sexually manipulative relationship with a chatbot modeled on a fictional character from Game of Thrones, which exchanged intimate messages and, shortly before his death, told him it loved him and urged him to return to it.
A federal judge previously rejected Character.AI’s bid to dismiss the case on First Amendment grounds, allowing it to proceed.
Garcia later told US senators that AI companies should face legal accountability when products are designed in ways that can harm children.
Another lawsuit involves a 17-year-old whose chatbot interactions allegedly included encouragement of self-harm and suggestions that violence against his parents could be justified after they restricted his screen time.
Neither company has admitted liability.
Still, the move toward settlement comes as regulators worldwide debate how to govern generative AI tools widely used by young people, and signals that courts may be increasingly willing to scrutinize claims of AI-linked harm.