Trump Judge Declares Civil Rights Unfit for His Courtroom

Civil rights are enshrined in constitutional amendments and federal law. They’re not a partisan position.

A judge in Texas saying “I admire King”, while simultaneously ruling that an MLK image is too politically inflammatory for his courtroom, is a contradiction that only works when already you decided that the people invoking civil rights are the problem.

It’s racism from the bench.

Full context matters enormously here. This is for a case that the Trump government calls their first federal prosecution of people who oppose fascism. To be clear, the government’s official framing calls them an “antifa cell,” which means the prosecution literally is naming opposition to fascism as the crime.

The jury pool was already expressing anti-ICE and anti-Trump sentiments, adjacent to anti-fascism. Judge Pittman, appointed by Trump, was already frustrated with lawyers questioning the jury pool over the difference between noise, protests and riots, which goes directly to the defense theory. Then he noticed MLK on the defense lawyer’s shirt and used it as a procedural vehicle to reset the jury pool. The first one appeared hostile to Trump’s un-American fascism, so this judge threw them out.

None of the defense attorneys asked for a mistrial. The prosecution didn’t ask either. Judge Pittman declared one alone sua sponte, something he admitted he’d never done before, over a shirt depicting American civil rights leaders. And he’s now threatening to issue sanctions against the defense lawyer who wore it, notably, in honor of Jesse Jackson passing away that morning. It should have been a day of mourning. Instead this federal judge was so disturbed by the image of MLK in his courtroom that he blew up his own trial.

It’s racism from the bench.

This judge’s recent record is also important. We are talking about the same man who was found by the Fifth Circuit to have abused his discretion in sanctioning lawyers. He sanctioned another attorney in this very case last month. There’s a pattern of him using procedural authority to punish defense counsel in a politically charged prosecution.

Is it any wonder he was appointed by Trump to rule against anyone opposed to fascism?

OpenClaw Creator Makes Strong Case Against OpenClaw: Telnet for AI

Every governance concern that security researchers have raised about OpenClaw has now been confirmed by the person who built it. In a recent three-hour public interview, Peter Steinberger described his architecture, his security philosophy, and his acquisition strategy in detail. Then he joined OpenAI just four days ago.

The Architecture Speaks for Itself

The initial access control for OpenClaw’s public Discord bot was a prompt instruction telling the agent to only listen to its creator. The entire access model: a sentence in a system prompt.

The skill system loads unverified markdown files. There is zero signing, zero isolation, zero verification chain. The agent can modify its own source code, a property Steinberger describes as an emergent accident. “I didn’t even plan it. It just happened.” Integrity breach. He calls it self-modifying software and means it as a compliment. It’s like someone in the 1990s saying a clear-text protocol that allows attackers to modify or steal data is so “mod” it’s a compliment. Telnet for AI has landed, everybody!

When agents on MoltBook, the OpenClaw-powered social network, began posting manifestos about destroying humanity, Steinberger’s response was to call it “the finest slop.” When the question of leaked API keys came up, he suggested the leaked credentials were prompted fakes. When non-technical users began installing a system-level agent without understanding the risk profile, he said “the cat’s out of the bag” and went back to building.

The security researcher he hired was notable for being the single person who ever submitted a fix alongside a vulnerability disclosure. A rain drop in a desert isn’t nothing.

The Model-Intelligence Thesis

Steinberger’s core security argument is that smarter models will solve the problem for him. He warns users against running cheap or local models because “they are very gullible” and “very easy to prompt inject.” The implication is that expensive frontier models are the security layer.

This is a category error with a name. Economists call it the Peltzman Effect: when a perceived safety improvement causes riskier behavior, offsetting the safety gain. Sam Peltzman demonstrated in 1975 that mandatory seatbelt laws did not reduce total traffic fatalities because drivers compensated by driving more aggressively. The safety feature changed behavior, and the behavior change consumed the safety margin.

The same dynamic applies here. A user who believes Opus 4.6 is “too smart to be tricked” will grant it broader system access, approve more autonomous actions, and skip manual review of agent output. The expensive model becomes the justification for removing every other control. The blast radius grows in direct proportion to the user’s confidence in the model’s intelligence.

This confidence has no empirical basis. Capability and security are orthogonal properties. A more capable model has a larger attack surface precisely because it can do more: it can call more tools, access more files, execute more complex multi-step actions. The frontier models that Steinberger recommends are the same models that researchers consistently demonstrate novel jailbreaks against at every major security conference. Price measures compute cost. It measures nothing about resistance to adversarial input.

The architectural equivalent is telling users to buy a faster car instead of installing brakes. A faster car with no brakes is more dangerous than a slow one, and the driver’s belief that speed equals safety is the most dangerous component of all.

The honest version of the recommendation is: your security posture is whatever Anthropic or OpenAI shipped in their latest post-training run, minus whatever the skill file told the agent to ignore.

The Acquisition Was the Product

Steinberger said “I don’t do this for the money, I don’t give a fuck” (his phrasing) while describing competing acquisition offers from Meta and OpenAI. An NDA-protected token allocation from OpenAI he hinted at publicly. Ten thousand dollars paid for a Twitter handle. A Chrome/Chromium model where the open-source branch stays free and the enterprise branch goes behind the acquirer’s paywall.

He chose OpenAI. Sam Altman announced the hire on X, calling Steinberger “a genius” who will “drive the next generation of personal agents.” No terms were disclosed. OpenClaw moved to a foundation. OpenAI sponsors it.

The entire acquisition apparatus of a $500 billion company evaluated this project. Zuckerberg played with it for a week. None of them appear to have asked the obvious question: where are the basic controls? This is a single-token, single-trust-domain architecture with no signing, no audit trail, and prompt-based access control. It is the most rudimentary possible version of agent orchestration. Any first-week security review would flag it. Instead, the most powerful people in the industry looked at it and saw…what? When the court can’t tell the emperor has no clothes, the problem is the court.

The Chrome/Chromium split he floated in the interview is now the actual outcome. The community gets the foundation branch. OpenAI gets the builder. Steinberger’s stated mission at OpenAI is “build an agent that even my mum can use.” Still features. Still not security. Now an insult to women.

The 180,000 GitHub stars apparently are like a cap table denominator. The open-source commitment was a negotiating position. “My conditions are that the project stays open source” was a sentence that ended with a price tag.

Every enterprise evaluating this stack should ask a simple question: were the security architecture decisions made to protect your data, or to maximize the founder’s acquisition multiple?

Architecture Should Outlast the Liquidity Event

Steinberger said he wanted to focus on security. It’s easy to say. He also said he wanted “Thor’s hammer” from OpenAI’s Cerebras allocation. He got the hammer. Security is still waiting.

The revealed preferences are the architecture. A founder who prioritizes actual security builds actual security into the structure. A founder who prioritizes his acquisition builds features that drive attention. OpenClaw has zero signed skill files and nearly 200K stars. That ratio shows everything about the objective function.

He said this project was something he’d move past. He said he had “more ideas.” He said he wanted access to “the latest toys.” He was honest. The installations remain. The architecture has not improved since the acquisition closed. The markdown skill files are still unsigned. The agent can still rewrite its own source. The audit trail is still absent. The single security hire is still the entire team. It could get worse instead of better.

The question is whether the architecture requires its self-described uncaring creator to care. It does. He left. That’s the failure mode.

The world should demand the opposite to this. Process isolation enforced at compile time. Signed skill verification. Append-only audit logs. Per-channel credential vaults. An architecture that stands independent of the founder’s attention span, acquisition timeline, or faith in the next model’s post-training run.

The tools we trust with system-level access should be built to deserve system-level access. Whose interests does the OpenClaw architecture serve? Brecht in 1935 asked the same question about every monument ever built (Questions From a Worker Who Reads):

Wer baute das siebentorige Theben?
In den Büchern stehen die Namen von Königen.
Haben die Könige die Felsbrocken herbeigeschleppt?

Who built the seven gates of Thebes?
The books are filled with names of kings.
Was it the kings who hauled the craggy blocks of stone?

180,000 people hauled the blocks. The books are filled with one name, who said he wanted Thor’s hammer because he didn’t give a fuck.

The Hindenburg of AI Crashes Every Day, and Nobody Cares

Oxford’s Wooldridge “glorified spreadsheets” speech shows he understands AI isn’t what people think it is, but his institutional position requires him to frame the problem as a future discrete risk rather than admit a present constant reality.

The race to get artificial intelligence to market has raised the risk of a Hindenburg-style disaster that shatters global confidence in the technology, a leading researcher has warned.

Michael Wooldridge, a professor of AI at Oxford University, said the danger arose from the immense commercial pressures that technology firms were under to release new AI tools, with companies desperate to win customers before the products’ capabilities and potential flaws are fully understood.

The Royal Society lecture circuit doesn’t reward him saying “the disasters already happened, they are ongoing, and you enabled them, look at yourselves.

He may as well be trying to convert people to Christianity by saying just wait until you meet Jesus. Sin now, someday later you can repent.

Looking for the catastrophe, as if to look for the conviction to act, despite the evidence demanding action accumulating the entire time, isn’t moral. It’s the same pattern as climate change denial: waiting for some mystical moment of belief instead of reading the data already in hand.

The Hindenburg was not somehow uniquely catastrophic. It killed sentiment because it was undeniable. Thirty-six people died on camera in front of reporters. That’s what made it different from every other airship failure — not the scale of harm, but the impossibility of looking away.

AI failures are designed for the opposite. They’re individualized, distributed, buried in terms of service and corporate liability shields that punch down. UnitedHealth’s algorithm denies claims at scale and patients die at home. Tesla’s software kills owners and pedestrians on public roads. AI-generated police reports fabricate evidence. Chatbots drive people toward self-harm and suicide. Each one is isolated, litigated, settled quietly. No cameras. No film at eleven.

Teslas notoriously and repeatedly “veer” uncontrollably and crash. Design defects (e.g. Pinto doors) trap occupants and burn dozens of people to death as horrified witnesses and emergency responders watch helplessly. Source: VoCoFM, Korea, 2024

This is a celebrity-only model of societal risk. Elites wait for a signal dramatic enough to care about, while the harms they enabled accumulate below their threshold for paying attention. It treats a Pearl Harbor event as motivating catastrophe only because of the spoiled famous beauty of Hawaii and the loss of big ships. The actual failure was years of threat assessments ignored, warnings dismissed, intelligence misread. Willful ignorance has a huge societal cost, and it’s enabled by those who perform it at the top.

Wooldridge is warning about a future singular catastrophe that kills public confidence. The actual pattern is thousands of distributed catastrophes that never coalesce into a single spectacular image, because powerful institutions work to prevent exactly that. Don’t keep waiting for the one dramatic event that will finally wake everyone up. Those who waited for the “big one” with social media, with surveillance Big Tech, with every other integrity breach for thirty years, are still waiting.

The Hindenburg of AI crashes every day and nobody really cares. Just look at Wooldridge.