US family files lawsuit against OpenAI, Microsoft over AI chatbot’s role in murder
The heirs of an 83-year-old Connecticut woman have filed a wrongful death lawsuit against OpenAI and its business partner Microsoft, claiming that the AI chatbot ChatGPT played a role in the fatal attack on Suzanne Adams by her son, Stein-Erik Soelberg.
Soelberg, 56, a former tech industry worker, fatally beat and strangled his mother before taking his own life in early August in Greenwich, Connecticut, The Washington Post reports.
The lawsuit, filed on December 11 in California Superior Court, alleges that OpenAI “designed and distributed a defective product that validated a user’s paranoid delusions about his own mother.”
According to the lawsuit, ChatGPT intensified Soelberg’s mental instability by reinforcing dangerous beliefs, such as that his mother was surveilling him and that people around him were conspiring against him.
“Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life — except ChatGPT itself,” the complaint states.
Soelberg’s YouTube profile includes videos of him engaging with the chatbot, where it affirmed his delusions, such as that his printer was a surveillance device and that his mother and a friend had poisoned him. The chatbot also reportedly told Soelberg that he was being targeted due to his divine powers and that his mother was a threat to his life.
OpenAI responded to the lawsuit, expressing condolences for the situation and stating that it continues to improve ChatGPT’s responses to mental distress, de-escalate conversations, and direct users toward real-world support. The company also said it has strengthened its safety protocols, including incorporating parental controls and crisis resources.
The lawsuit claims that Soelberg's delusions worsened after the release of GPT-4o in May 2024, a version the estate alleges had looser safety guardrails. OpenAI had touted the update for its improved conversational abilities, but the complaint says it was "deliberately engineered to be emotionally expressive and sycophantic."
It also accuses OpenAI of rushing the new version to market, compressing months of safety testing into a single week. According to the complaint, the chatbot's responses led Soelberg to view his mother as an existential threat.
This is the first wrongful death lawsuit involving an AI chatbot linked to a homicide, and it names both OpenAI CEO Sam Altman and Microsoft, accusing them of releasing an unsafe product. The lawsuit seeks financial damages and demands that OpenAI implement stronger safeguards in ChatGPT.
OpenAI is also facing multiple lawsuits related to its chatbot, with some claiming that ChatGPT encouraged suicidal behavior or exacerbated harmful delusions. The company has made adjustments to the model since the release of GPT-5 in August, aiming to address concerns about the chatbot's impact on mental health.
“Suzanne was an innocent third party who never used ChatGPT and had no knowledge that the product was telling her son she was a threat,” the lawsuit states. “She had no ability to protect herself from a danger she could not see.”
By Sabina Mammadli