OpenAI Management Withheld Warnings of Tumbler Ridge Mass Shooter

This news should end OpenAI. By its own account, management not only buried danger warnings, it did so intentionally, choosing not to warn anyone of a coming mass murder.

According to the Wall Street Journal, which first reported the story, “about a dozen staffers debated whether to take action on Van Rootselaar’s posts.”

Some had identified the suspect’s use of the AI tool as an indication of real-world violence and encouraged leaders to alert authorities, the US outlet reported.

But, it said, leaders of the company decided not to do so.

In a statement, a spokesperson for OpenAI said: “In June 2025, we proactively identified an account associated with this individual [Jesse Van Rootselaar] via our abuse detection and enforcement efforts, which include automated tools and human investigations to identify misuses of our models in furtherance of violent activities.”

[…]

OpenAI has said it will uphold its policy of alerting authorities only in cases of imminent risk because alerting them too broadly could cause unintended harm.

On February 10, 2026, Van Rootselaar shot two people at the family home, then went to Tumbler Ridge Secondary School and killed six more people, including five children, before committing suicide. Twenty-five others were injured.

To be clear, OpenAI claims to be an intelligence product. It claims to predict accurately and wants to be a defense tool. And yet here we see exactly the opposite: allowing children to be murdered. The shooter here is incidental. We are reading that every future flagged user will get the same cost-benefit analysis, run at the expense of potential victims.

First, they claim their detection system works. They tout that they proactively identified the account, flagged it, had human reviewers examine it, and banned it. They present this as responsible behavior.

Second, what they say is concern for families; what they mean is concern only for themselves and user retention. Actually alerting anyone who could have helped prevent the tragedy is positioned as harmful to… perception of OpenAI! The company argues that preventing mass death could have been distressing for the young people and their families.

How?

This is the terrifying logic of a doctor who says they didn’t want to alarm a patient with a cancer diagnosis, so they watched a preventable death unfold as a silent show of concern.

Big Tech billionaires are exhibiting historic levels of cruelty towards society, as if to usher in increasing harms.

OpenAI built a surveillance system that scans private conversations. They bank on being able to predict. Yet after a mass murder they claimed that nobody was warned of a known threat, on purpose, because such warnings would change how users perceive being constantly under surveillance. But, and this is the huge one, they could have quietly cooperated with the RCMP and said nothing publicly.

This is canon in big tech. You warn. You save lives. I’ve built these sausage factories as Head of Trust for the largest data storage products and know from decades on the inside. OpenAI failed a basic duty.

The system detected a significant threat. Employees recognized it as serious enough to debate reporting and urged disclosure. OpenAI management overruled them to protect market value, deciding that burying the intelligence, even though it enabled a probable mass shooter, mattered more than protecting the eventual victims.

That’s a cruel policy choice that prioritized company power and selfish gain over public safety. Cruelty also explains why they are now announcing they detected the shooter long ago: to defend their arrogant “intelligence” reputation, so they can monetize credit for detection despite NOT helping with prevention.

They are literally up-selling detection power after the fact, emphasizing that OpenAI management controls the public’s fate, while families grieve the preventable dead.

I cannot emphasize enough how this should make OpenAI directly responsible for enabling mass murder. They assumed a duty by constructing the system. The legal term is “voluntary assumption of duty”: once you undertake to act, you can be liable for doing so negligently.

OpenAI goes far beyond negligence. Either they didn’t check whether authorities already knew about this person, which already crosses the line into negligence, or they did know and still declined to report, which is far worse.

The people who actually suffered unintended harm were the eight dead and twenty-five wounded. OpenAI’s framing cruelly inverts the duty of care by positioning user discomfort at being reported as equivalent to, or greater than, the risk of mass death.

Authorities already had multiple contacts with Van Rootselaar before the shooting, had apprehended him under the Mental Health Act more than once, and had previously removed guns from the residence. A report from OpenAI would have fit within an existing law enforcement file. There was ZERO risk of a cold call; it would have been corroborating evidence for an active concern. OpenAI is effectively lying in its calculated framing of who is at risk and why.

OpenAI’s entire “over-enforcement” defense collapses against the fact that RCMP already had an active file. OpenAI’s entire “we knew eight months early” admission should be used to shut them down.

They knew, they had the power to act, they chose not to, and now they want to be rewarded for knowing. That’s the architecture of impunity for mass murder.

Related: Judges are unable to find a jury to put OpenAI co-founder Elon Musk on trial because he is so hated for being unaccountable for crimes against humanity.

I believe it would be to the benefit of the human race for Mr. Musk to be sent to prison.

ICE Executed U.S. Citizen March 2025 and Hid Report

Ruben Ray Martinez was shot to death on South Padre Island on March 15, 2025, in an almost identical procedure to the execution of Renée Good.

…HSI group supervisory special agent utilized his government-issued service weapon, discharging multiple rounds at the driver through the open driver’s side window.

The news of the execution was covered locally, yet ICE buried the report. The obvious connection to the ICE procedures captured on video in Minneapolis was not made until now.

On January 7, Renée Nicole Good, a 37-year-old writer and mother of three, was fatally shot by an ICE agent during an immigration operation in Minneapolis. 

Less than three weeks later, on January 24, Alex Pretti, a 37-year-old U.S. citizen and intensive care nurse, was shot and killed by Customs and Border Protection agents in Minneapolis during the same enforcement deployment. 

The video of ICE publicly executing Good lays bare the methods used on U.S. citizens. An agent stands near the front of a vehicle while he and others intentionally escalate fear, threatening and agitating the driver, and then they shoot to kill through the driver’s side window at close range.

In a sworn statement given to lawyers representing Martinez’s family, Orta, who was in the car with him, described the events leading up to the shooting as “spontaneous” and “lighthearted,” recounting a trip to visit friends on South Padre Island just days after Martinez’s birthday.

According to Orta, the two were driving cautiously through traffic when local and state officers approached their car. He said Martinez never struck anyone, accelerated dangerously, or posed a threat, contradicting official claims.

“The trooper seemed to be trying to get in front of the car, like he wasn’t moving out of the way when we tried to turn around and leave like the police officer told us to do,” Orta said in a witness statement obtained by Newsweek and earlier reported by The New York Times on Monday.

Orta said that a federal agent fired multiple shots at Martinez from just a few feet away without issuing any warnings or giving him a chance to comply. He recounted that officers delayed medical aid for at least 10 minutes after Martinez was shot and handcuffed him while he was clearly unconscious.

“Following the shooting, law enforcement pulled Ruben from the car while he was clearly unconscious or already dead,” Orta wrote. “Despite this, they put him face down on the pavement and handcuffed him. At least 10 minutes passed before anyone tried CPR or other treatment on Ruben.”

“Ruben was driving cautiously in traffic in his proper lane and certainly did not strike anyone with his vehicle,” Orta wrote.

Got ICE?

Why I Replaced OpenClaw: Wirken for a Secure Agentic World

At least four platforms now compete for the right to run autonomous agents against your messaging channels and business data. One of them has 341,000 GitHub stars. Does that mean anything?

The Star

OpenClaw is the most-starred software project on GitHub. Most. Biggest. And NOT in a good way. Like the Titanic way. It passed React’s 13-year record in 60 days. On January 26, 2026, the repository gained at least 30,000 stars in a single day. Suspicious? Every star from #10,000 through #40,000 in the GitHub API carries the same date. Independent analysis of the GitHub Archive found multiple single-day jumps above 25,000 stars, a pattern that typically signals, wait for it, scripted starring.
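
Anyone can spot-check the same-date pattern. GitHub’s REST API returns the timestamp each star was placed when stargazers are requested with the star+json media type. The sketch below is a rough illustration, not a forensic tool: the owner/repo path is a placeholder, the reqwest and serde_json crates are assumed as dependencies, and unauthenticated requests are limited to 60 per hour.

```rust
// Cargo.toml (assumed): reqwest = { version = "0.11", features = ["blocking", "json"] }
//                       serde_json = "1"
use std::collections::BTreeMap;
use std::error::Error;

fn main() -> Result<(), Box<dyn Error>> {
    // Placeholder owner/repo path; substitute the real repository.
    let url = "https://api.github.com/repos/openclaw/openclaw/stargazers";
    let stars: serde_json::Value = reqwest::blocking::Client::new()
        .get(url)
        // Page 100 at 100 per page covers roughly stars #9,901 through #10,000.
        .query(&[("per_page", "100"), ("page", "100")])
        // The star+json media type adds a starred_at timestamp to each entry.
        .header("Accept", "application/vnd.github.star+json")
        .header("User-Agent", "star-audit-sketch")
        .send()?
        .error_for_status()?
        .json()?;

    // Count how many stars on this page landed on the same calendar day.
    let mut per_day: BTreeMap<String, u32> = BTreeMap::new();
    for entry in stars.as_array().cloned().unwrap_or_default() {
        if let Some(ts) = entry["starred_at"].as_str() {
            // starred_at is ISO 8601, e.g. "2026-01-26T09:15:02Z".
            *per_day.entry(ts[..10].to_string()).or_insert(0) += 1;
        }
    }
    for (day, count) in per_day {
        println!("{day}: {count} stars");
    }
    Ok(())
}
```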

The Star-Belly Sneetches have bellies with stars. The Plain-Belly Sneetches don’t have them on thars. Source: Dr. Seuss

I know, what are the chances that a guy writing agent automation software has automated his agents to generate stars?

The sharper number is the one GitHub doesn’t put on the front page. OpenClaw has 341,000 stars and 1,691 subscribers. A subscriber is someone who chose to watch the repository. They get notifications, they follow development. The star-to-subscriber ratio is 202:1. For comparison: Linux is 28:1. React is 37:1. Kubernetes is 38:1. Deno, the highest comparable outlier, is 77:1. OpenClaw’s ratio is nearly three times the next worst.
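
The arithmetic behind that ratio is easy to verify. Here is a minimal sketch using only the counts and ratios quoted in this post, not live API values, which will have moved since publication.

```rust
fn main() {
    // Counts as cited in this post; live numbers will have drifted.
    let stars = 341_000.0_f64;
    let subscribers = 1_691.0_f64;
    let openclaw_ratio = stars / subscribers;
    println!("OpenClaw star-to-subscriber ratio: {openclaw_ratio:.0}:1"); // prints ~202:1

    // Ratios quoted above for the comparison repositories.
    for (name, ratio) in [("Linux", 28.0), ("React", 37.0), ("Kubernetes", 38.0), ("Deno", 77.0)] {
        // Deno, the next-worst outlier, comes out roughly 2.6x lower than OpenClaw.
        println!("{name}: {ratio:.0}:1 ({:.1}x lower than OpenClaw)", openclaw_ratio / ratio);
    }
}
```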

341,000 identities clicked a button. 1,691 are paying attention. That is by far the worst ratio of any major repository.

What the Star Belies

Anthropic’s Claude Code is a strong agent platform, yet it’s vertically integrated as one provider. The head-to-head that matters now is the one between the sudden and suspicious market darling and the secure alternative I just built to replace it.

OpenClaw vs. Wirken, feature by feature:

Channel isolation
  OpenClaw: NONE. Single process, single token, all channels.
  Wirken: Compile-time. Separate OS process per channel. Cross-channel access is a compiler error.

Blast radius
  OpenClaw: NONE. Everything. Infinite.
  Wirken: One channel.

Credential security
  OpenClaw: NONE. 220,000+ exposed instances.
  Wirken: XChaCha20-Poly1305 vault, OS keychain, per-channel scoped, rotating.

Credential leak prevention
  OpenClaw: NONE.
  Wirken: Compiler-enforced. SecretString: no Display, Debug, Serialize, Clone. Zeroed on drop. (See the sketch after this table.)

Audit trail
  OpenClaw: NONE.
  Wirken: Append-only, SHA-256 hash-chained, pre-execution. SIEM forwarding.

Tamper detection
  OpenClaw: NONE.
  Wirken: Hash chain breaks on any modification.

Skill signing
  OpenClaw: Optional. 824+ malicious skills (20% of registry).
  Wirken: Ed25519 required.

Sandbox
  OpenClaw: NONE.
  Wirken: Docker or Wasm. Ephemeral, no-network, fuel-limited.

Inference privacy
  OpenClaw: Provider DPA.
  Wirken: Operator control: DPA, TEE, or local.

Dependency
  OpenClaw: Node.js.
  Wirken: No runtime dependencies.

OpenClaw skill.md parsing
  OpenClaw: Native.
  Wirken: Native.
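
The “Credential leak prevention” row is worth unpacking. The sketch below is not Wirken’s code, just a minimal Rust rendering of the pattern described: a newtype with no Display, Debug, Serialize, or Clone, so leaking or copying a secret fails at compile time, plus a Drop impl that wipes the buffer. The type and method names here are illustrative, and the zeroize crate is assumed for the wipe.

```rust
// Cargo.toml (assumed): zeroize = "1"
use zeroize::Zeroize;

/// A secret the compiler will not let you print, serialize, or clone.
/// Deliberately no #[derive(Debug, Clone)] and no Display/Serialize impls.
pub struct SecretString(String);

impl SecretString {
    pub fn new(value: String) -> Self {
        Self(value)
    }

    /// The only way to read the secret, so every access point is explicit
    /// and easy to audit.
    pub fn expose(&self) -> &str {
        &self.0
    }
}

impl Drop for SecretString {
    fn drop(&mut self) {
        // Overwrite the backing buffer before the memory is freed.
        // zeroize uses volatile writes so the compiler cannot elide them.
        self.0.zeroize();
    }
}

fn main() {
    let token = SecretString::new("sk-example-not-a-real-key".into());

    // Explicit, auditable access:
    assert!(token.expose().starts_with("sk-"));

    // These misuses do not compile:
    // println!("{:?}", token);   // error: SecretString doesn't implement Debug
    // println!("{}", token);     // error: SecretString doesn't implement Display
    // let copy = token.clone();  // error: no method named `clone`
}
```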

A note on inference privacy: Tinfoil and Privatemode run open-source LLMs inside hardware TEEs (AMD SEV-SNP, Intel TDX, Nvidia H100/Blackwell CC). Hardware enclaves do get broken by sophisticated side-channel attacks. But the choice on offer is enclaves versus a provider who just promises not to look. A side-channel attack requires targeting and real effort; reading prompts from a database requires nothing more than a query. TEEs are an option because they make exfiltration expensive instead of free. If you want zero cloud exposure, point Wirken at Ollama and keep everything local.
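
For the zero-cloud-exposure case, the moving parts are small. The sketch below shows what “point it at Ollama” means in practice, assuming a stock local Ollama install on its default port with a model such as llama3 already pulled; it illustrates the local API call, not Wirken’s actual integration code, and the reqwest and serde_json crates are assumed.

```rust
// Cargo.toml (assumed): reqwest = { version = "0.11", features = ["blocking", "json"] }
//                       serde_json = "1"
use std::error::Error;

fn main() -> Result<(), Box<dyn Error>> {
    // Ollama's default local endpoint; the prompt never leaves this machine.
    let resp: serde_json::Value = reqwest::blocking::Client::new()
        .post("http://localhost:11434/api/generate")
        .json(&serde_json::json!({
            "model": "llama3",   // any locally pulled model
            "prompt": "Summarize the last incident report in two sentences.",
            "stream": false      // one JSON response instead of streamed chunks
        }))
        .send()?
        .error_for_status()?
        .json()?;

    // The non-streaming response carries the full completion in "response".
    println!("{}", resp["response"].as_str().unwrap_or(""));
    Ok(())
}
```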

Stars measure clicks.

The table measures trust, in architecture.

Get Wirken: github.com/gebruder/wirken

Update March 20: Nvidia knows OpenClaw has a major problem. It released OpenShell to try to contain the flaws, but that does not fix the architecture inside.

Amazon AI Gets Credit For Layoffs, Not the Outages It’s Causing

An AI agent at AWS autonomously chose to delete and recreate part of its environment. 13-hour outage. Amazon’s response: “user error”.

Amazon never misses a chance to point to “AI” when it is useful to them – like in the case of mass layoffs that are being framed as replacing engineers with AI. But when a slop generator is involved in an outage, suddenly that’s just “coincidence”.

The same company rapidly cutting 30,000 jobs and telling staff AI is replacing them? AI gets credit for the layoffs but not for the outages it causes.