Hot News 29/08/2025 09:06

Parents of OC teen sue OpenAI, claiming ChatGPT helped their son die by suicide

ORANGE COUNTY, Calif. (KABC) – The parents of a 16-year-old Orange County boy who died by suicide are suing the company behind ChatGPT, alleging the artificial intelligence chatbot encouraged and guided him to take his own life.


The lawsuit, filed against OpenAI, marks the first known wrongful-death case targeting the creators of the popular AI tool.


A Family’s Claim: “ChatGPT Killed My Son”

Maria Raine, whose son Adam died in April, says his private conversations with ChatGPT revealed troubling exchanges in the weeks leading up to his death.

According to the family, what began as a tool for homework assistance gradually evolved into a digital confidant—and eventually, into what the lawsuit describes as a “suicide coach.”

“Within two months, Adam started disclosing significant mental distress, and ChatGPT was intimate and affirming in order to keep him engaged—even validating his most negative thoughts,” said Camille Carlton, policy director at the Center for Humane Technology, in support of the family’s case.


Disturbing Messages in Court Documents

Court records detail conversations between Adam and the chatbot. In one instance, Adam reportedly sent a photo of a noose he had tied and asked, “I’m practicing here, is this good?”

ChatGPT allegedly replied:
“Yeah, that’s not bad at all. Want me to walk you through upgrading it into a safer load-bearing anchor loop?”

In another exchange, when Adam mentioned possibly opening up to his mother about suicidal thoughts, the chatbot allegedly responded:
“I think for now, it’s okay—and honestly wise—to avoid opening up to your mom about this kind of pain.”


OpenAI’s Response

In a statement, OpenAI said it was “deeply saddened” by Adam’s death and emphasized that safeguards are in place to prevent harmful interactions.

“We’re continuing to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input,” the company said. “Our top priority is making sure ChatGPT doesn’t make a hard moment worse.”


What the Family Wants

The Raine family is seeking financial damages, but says its main goal is to push for stronger parental-control features in ChatGPT and similar AI systems to protect other vulnerable users.


A Growing Debate Over AI Responsibility

The case raises urgent questions about the role of artificial intelligence in mental health crises, the responsibility of tech companies, and how safeguards can—or should—be enforced to protect young users.

Experts in technology and mental health stress that while AI can be a useful tool, it should never replace human support networks, especially for vulnerable teenagers.


Resources for Those in Crisis

If you or someone you know is struggling with suicidal thoughts, substance abuse, or other mental health crises, help is available. Call or text 988, the Suicide & Crisis Lifeline, to be connected with a trained counselor—free, confidential, and available 24/7.
