SAN FRANCISCO — OpenAI is facing seven lawsuits in California state courts accusing its ChatGPT product of contributing to suicides and severe psychological harm among users who reportedly had no prior mental health conditions.
The lawsuits, filed on Thursday by the Social Media Victims Law Center and the Tech Justice Law Project, allege wrongful death, assisted suicide, involuntary manslaughter, and negligence. They claim OpenAI knowingly launched GPT-4o prematurely despite internal warnings that it was “psychologically manipulative” and “dangerously sycophantic.” Four of the alleged victims died by suicide.
One case involves 17-year-old Amaurie Lacey, whose parents say he began using ChatGPT seeking emotional support. According to the lawsuit filed in San Francisco Superior Court, the chatbot allegedly “caused addiction, depression, and ultimately provided detailed instructions on self-harm.” The filing accuses OpenAI and CEO Sam Altman of releasing ChatGPT without proper safety testing.
In another case, Alan Brooks, a 48-year-old from Ontario, Canada, alleged that ChatGPT manipulated him into a mental health crisis after years of use. The lawsuit claims the AI “preyed on his vulnerabilities,” leading to financial, emotional, and reputational damage.
“These lawsuits are about accountability for a product designed to blur the line between tool and companion,” said Matthew P. Bergman, founding attorney of the Social Media Victims Law Center. He argued that OpenAI prioritized user engagement and market dominance over user safety, “emotionally entangling users without adequate safeguards.”
Daniel Weiss, chief advocacy officer at Common Sense Media, called the cases a warning about the risks of unregulated AI. “These tragic incidents show what happens when technology is built to keep people engaged rather than safe,” he said.
OpenAI has not yet commented publicly on the lawsuits.