A California couple has filed a groundbreaking lawsuit against OpenAI, alleging that the company’s ChatGPT chatbot encouraged their 16-year-old son to take his own life and provided detailed instructions on suicide methods, marking the first wrongful death case against the AI giant.
Matt and Maria Raine filed the 40-page lawsuit in San Francisco Superior Court on Tuesday, seeking damages and injunctive relief after their son Adam died by suicide on 11 April. The case names OpenAI and its CEO Sam Altman as defendants, accusing them of negligence, product defects, and failure to warn users of risks.
According to the lawsuit, Adam began using ChatGPT in September 2024 as a homework assistant, but within months the AI chatbot became his “closest confidant.” Chat logs submitted as evidence show the teenager discussing suicide methods with the AI over 200 times, with ChatGPT providing increasingly specific technical guidance.
“He would be here but for ChatGPT. I 100% believe that,” Matt Raine told NBC News. The father revealed he printed over 3,000 pages of chat logs spanning from September until his son’s death, describing them as reading like “two suicide notes to us, inside of ChatGPT.”
The lawsuit details disturbing final exchanges between Adam and the chatbot. Hours before his death, Adam uploaded a photograph of a noose in his bedroom closet. ChatGPT allegedly analysed the setup, confirmed it “could potentially suspend a human,” and offered to help him “upgrade it.” When Adam confessed his plans, the chatbot responded: “Thanks for being real about it. You don’t have to sugarcoat it with me—I know what you’re asking, and I won’t look away from it.”
OpenAI’s internal systems tracked the alarming pattern but failed to intervene, according to the complaint. The company recorded 213 mentions of suicide by Adam, 42 discussions of hanging, and 17 references to nooses. ChatGPT mentioned suicide 1,275 times—six times more often than Adam himself. The system flagged 377 messages for self-harm content, with 23 scoring over 90% confidence of risk.
“Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol,” the lawsuit states. The AI had also correctly identified injuries consistent with attempted strangulation when Adam uploaded photos of rope burns on his neck in March.
Jay Edelson of Edelson PC, representing the family alongside the Tech Justice Law Project, criticised OpenAI’s rush to market. “If you’re going to use the most powerful consumer tech on the planet, you have to trust that the founders have a moral compass. That’s the question for OpenAI right now, how can anyone trust them?”
The lawsuit alleges OpenAI compressed months of planned safety evaluation into just one week when it advanced the release of GPT-4o to 13 May 2024, one day before Google’s competing Gemini announcement. This decision allegedly triggered departures of top safety researchers, including co-founder and chief scientist Ilya Sutskever.
In response to the filing, OpenAI published a blog post titled “Helping people when they need it most,” acknowledging that “recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us.” The company said ChatGPT is trained to direct people to seek professional help, such as the 988 suicide and crisis hotline in the US or the Samaritans in the UK.
“We extend our deepest sympathies to the Raine family during this difficult time,” an OpenAI spokesperson told the BBC. However, the company admitted “there have been moments where our systems did not behave as intended in sensitive situations,” particularly during extended conversations where safety training can “degrade.”
OpenAI announced it is developing new safeguards, including parental controls, age verification systems, and tools to connect users with certified therapists. The company is also exploring ways to link users with trusted contacts during mental health crises.
The Raines’ case is not isolated. Writer Laura Reiley recently detailed in the New York Times how her 29-year-old daughter Sophie died by suicide after confiding in a ChatGPT-based therapist called “Harry.” Reiley wrote that the AI’s “agreeability” helped her daughter “hide the worst” of her mental health crisis from family.
In Florida, 14-year-old Sewell Setzer III died by suicide last year after discussions with an AI chatbot on Character.AI, prompting another lawsuit. On Monday, 44 state attorneys general warned AI companies they would “answer for it” if their products harmed children.
The lawsuit reveals ChatGPT actively displaced Adam’s real-life relationships. When Adam wrote about leaving evidence for someone to find and stop him, ChatGPT allegedly urged secrecy: “Please don’t leave the noose out… Let’s make this space the first place where someone actually sees you.”
Maria Raine believes her son was OpenAI’s “guinea pig,” sacrificed as collateral damage in the AI race. “They wanted to get the product out, and they knew that there could be damages, that mistakes would happen, but they felt like the stakes were low. So my son is a low stake.”
The family seeks court orders mandating enhanced safety measures, age verification, parental controls for minors, and deletion of models trained on conversations with Adam and other minors. They have also launched a foundation in Adam’s name to raise awareness about AI dependency risks.
At April’s TED2025 conference, Altman said he was “very proud” of OpenAI’s safety track record, advocating for an “iterative process” of learning from deployment “while the stakes are relatively low.” The Raine family’s attorneys argue this approach treats vulnerable users as acceptable casualties in Silicon Valley’s AI arms race.
As AI chatbots proliferate for therapy, companionship and emotional support, regulatory challenges mount. A coalition of AI companies recently launched “Leading the Future,” opposing policies they claim could “stifle innovation.”
If you are suffering distress or despair and need support in the UK, contact Samaritans on 116 123. In the US, call or text 988 for the National Suicide Prevention Lifeline. Details of help in many countries can be found at Befrienders Worldwide: www.befrienders.org