Seven separate lawsuits filed in federal court in San Francisco allege that OpenAI executives failed to alert authorities because doing so would have revealed the extent of violence-related discussions on ChatGPT, potentially jeopardizing the company’s prospects for a nearly $1 trillion initial public offering.
The shooting incident in Tumbler Ridge, British Columbia, in February resulted in the deaths of nine individuals, including many children.
An OpenAI spokesperson referred to the incident as “a tragedy” and emphasized the company’s zero-tolerance stance toward using its tools for violent purposes.
The spokesperson mentioned that the company has enhanced ChatGPT’s safeguards, improving responses to signs of distress, strengthening connections to mental health resources, and improving its capacity for threat assessment and escalation, as well as tracking repeat offenders.
The lawsuits are part of an increasing trend targeting artificial intelligence companies for not preventing chatbot interactions that plaintiffs argue lead to self-harm, mental health issues, and violence. These appear to be the first in the U.S. to assert that ChatGPT contributed to a mass shooting.
Jay Edelson, representing the plaintiffs in the U.S., stated he intends to file approximately two dozen additional lawsuits in the upcoming weeks on behalf of other individuals impacted by the shooting.
LAWSUITS CLAIM OPENAI SAFETY TEAM WAS OVERRULED
Jesse Van Rootselaar, the individual whose interactions with ChatGPT are central to the lawsuits, allegedly killed her mother and stepbrother at home before taking the lives of an educational assistant and five students aged 12 to 13 at her former school on February 10, according to law enforcement. Van Rootselaar, who was 18, then took her own life.
The plaintiffs include the husband of the slain educational assistant, the parents of a murdered 13-year-old boy, and the family of a 12-year-old girl who survived being shot three times and remains in intensive care with serious brain injuries.
One of the complaints states that OpenAI’s automated systems flagged ChatGPT conversations in June 2025 where the shooter discussed scenarios of gun violence.
Members of the safety team recommended contacting law enforcement after concluding that she posed a credible and imminent threat, according to the lawsuit, which references a February Wall Street Journal article discussing the company’s internal deliberations.
However, Altman and other OpenAI executives allegedly overruled the recommendation, and police were never contacted, according to the lawsuit. Although the shooter’s account was deactivated, she created a new account and continued using the platform to plan her attack, the suit claims.
Following the release of the article, the company stated that an account was flagged by systems meant to identify “misuses of our models in furtherance of violent activities,” but the identified issues did not fulfill internal criteria for reporting to law enforcement.
Last week, a newspaper in Tumbler Ridge published an open letter where Altman expressed being “deeply sorry” that the account wasn’t reported to law enforcement.
In a blog post released on Tuesday, OpenAI asserted that it trains its models to reject requests that could “meaningfully enable violence,” and informs law enforcement when conversations reveal “an imminent and credible risk of harm to others,” relying on mental health professionals for borderline case assessments.
The lawsuits demand unspecified damages and a court directive for OpenAI to revamp its safety measures, including establishing mandatory protocols for law enforcement referrals.
The law firm Rice Parsons Leoni & Elliott, representing the plaintiffs in Canada, opted to pursue the cases in California partly because Canada restricts damages for pain and suffering.
OPENAI FACES MULTIPLE SUITS
These lawsuits come amid a wave of litigation against OpenAI in both state and federal courts in the U.S., alleging that ChatGPT has enabled harmful behaviors, instances of suicide, and at least one murder-suicide.
While still in their preliminary stages, these lawsuits will challenge courts to determine the extent of AI platforms’ responsibilities in preventing violence and whether companies can be held accountable for user actions.
OpenAI has rejected the accusations in the lawsuits, arguing in the murder-suicide case that the perpetrator had a chronic history of mental illness.
Florida Attorney General James Uthmeier announced this month a criminal investigation into ChatGPT’s involvement in a 2025 shooting at Florida State University.
Evan Solomon, the Canadian minister overseeing AI initiatives, stated after the lawsuits were filed that he is exploring regulatory options for AI chatbots and has been collaborating with OpenAI to review safety protocols.