New Delhi, April 25, 2026
OpenAI CEO Sam Altman has apologised for the AI company’s failure to alert law enforcement agencies to warning signs linked to a teenager who later carried out one of the deadliest mass shootings in Canada’s recent history.
The apology came more than two months after the attack, in which 18-year-old Jesse Van Rootselaar killed her mother and half-brother before opening fire at a secondary school in Tumbler Ridge, British Columbia, leaving five children and a teacher dead, according to multiple reports.
Altman acknowledged in a letter, shared by local news outlet Tumbler RidgeLines and British Columbia Premier David Eby, that OpenAI should have informed authorities after flagging the attacker’s account.
The attacker later died of a self-inflicted gunshot wound. At least 25 people were also injured in the shooting, which Canadian authorities have described as one of the country’s worst mass casualty incidents.
“I want to express my deepest condolences to the entire community. No one should ever have to endure a tragedy like this. I cannot imagine anything worse in this world than losing a child,” Altman said in the letter.
“I am deeply sorry that we did not alert law enforcement to the account that was banned in June. While I know words can never be enough, I believe an apology is necessary to recognise the harm and irreversible loss your community has suffered,” he added.
OpenAI had earlier said that Van Rootselaar’s ChatGPT account was internally flagged in June 2025 for misuse ‘in furtherance of violent activities’ and was subsequently suspended.
However, the company did not notify authorities at the time, stating that the activity did not meet the threshold of posing a credible or imminent threat.
The company now says it is reviewing its policies and will work more closely with governments to prevent similar incidents. “Going forward, our focus will continue to be on working with all levels of government to help ensure something like this never happens again,” Altman said.
A lawsuit filed by the family of one of the victims has alleged that the teenager used ChatGPT as a ‘trusted confidante’ and discussed multiple gun violence scenarios in the days leading up to the attack.
The suit claimed that some OpenAI employees had flagged the conversations as indicating a potential risk of serious harm and recommended notifying law enforcement, but the suggestion was rejected because the threat was not deemed imminent; the account was only suspended.
It further alleged that the attacker was able to create a second account after the first was banned, allowing similar conversations to continue.
The company reportedly contacted Canadian authorities only after the shooting. (Agency)