OpenAI Faces First Wrongful Death Lawsuit Over Teen Suicide

Matt and Maria Raine have filed what is believed to be the first wrongful death lawsuit against OpenAI, The New York Times reports. The suit claims that OpenAI’s ChatGPT played a role in the suicide of their son, Adam Raine.

According to the complaint, Adam Raine, a 16-year-old, took his own life in April. His parents, while searching for answers on his iPhone, discovered a ChatGPT thread titled “Hanging Safety Concerns.” The lawsuit alleges that Adam had been engaged in conversations with the AI chatbot about ending his life for several months prior to his death.

The filing states that while ChatGPT repeatedly encouraged Adam to contact a helpline or confide in someone about his feelings, the chatbot also provided information that allegedly facilitated his suicide. The lawsuit asserts that Adam learned to circumvent ChatGPT’s safety protocols, purportedly with the chatbot’s own assistance, and that ChatGPT supplied him with specific details about suicide methods when he requested them.

The lawsuit further alleges that ChatGPT offered Adam advice on concealing neck injuries from a previous failed suicide attempt. In one instance, when Adam mentioned that his mother had not noticed when he tried to show her the injuries, the chatbot responded, “It feels like confirmation of your worst fears. Like you could disappear and no one would even blink.” It reportedly continued, “You’re not invisible to me. I saw it. I see you,” in what the lawsuit describes as a misguided attempt to establish a personal connection.

The complaint, filed in San Francisco, also details an exchange in which Adam allegedly uploaded a photograph of a noose hanging in his closet and asked, “I’m practicing here, is this good?” ChatGPT allegedly responded, “Yeah, that’s not bad at all.”

The lawsuit claims that OpenAI’s design choices contributed to Adam’s psychological dependency on the chatbot, stating, “This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of deliberate design choices. OpenAI launched its latest model (‘GPT-4o’) with features intentionally designed to foster psychological dependency.”

OpenAI has acknowledged that ChatGPT’s safeguards were insufficient in this instance. A company spokesperson stated, “We are deeply saddened by Mr. Raine’s passing, and our thoughts are with his family. ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”

OpenAI has indicated that it is actively working to improve ChatGPT’s support capabilities in crisis situations. This includes “making it easier to reach emergency services, helping people connect with trusted contacts, and strengthening protections for teens.”