KKK and the Red Dragon of Canada

An odd footnote in history is that Canadian chapters of the KKK were distinguished by a “Red Dragon” theme, complete with a “Grand Ragon” in place of the Klan’s Grand Dragon, as documented by the University of Washington.

The Royal Riders were a Ku Klux Klan auxiliary for people who were “Anglo-Saxon” and English-speaking but not technically native-born American citizens.

While many Royal Riders chapters were in the United States, the KKK also organized chapters in Canada. Some of the earliest documented Klan organizing in Canada occurred in British Columbia in November, 1921, at roughly the same time that organizers first began working in Washington state.

The Royal Riders of the Red Robe was only nominally a separate organization from the Klan. It was listed in the Klan’s Pacific Northwest Domain Directory, shared an office with Seattle Klan Local 4, and had its meetings with similar rituals in the same places as the Seattle Klan. Beginning in 1923, Klan events and propaganda came to regularly feature Royal Rider initiations and news.

The Grand Ragon (as opposed to the Klan’s Grand Dragon) of the Pacific Northwest Realm of the Royal Riders of the Red Robe was J. Arthur Herdon, and the King County Ragon was Walter L. Fowler. Naturalized but not native-born citizens in Seattle’s Royal Riders were organized into another Klan Auxiliary, the American Krusaders, on October 18, 1923.

Solid TEE is the Antidote to Simon’s Lethal AI Trifecta

Solid is the Fix Everyone Needs for AI Agent Security

Following up on my recent analysis of the overhyped GitHub MCP “vulnerability”, I was reading Simon Willison’s excellent breakdown of what he calls the lethal trifecta for AI agents.

Cleaning bourbon off my keyboard after laughing at Invariant Labs’ awful “critical vulnerability” report made me a bit more cautious this time. My immediate reaction was to roll my eyes at yet another case of basic misconfiguration dressed up as a zero-day, but Simon is actually describing something fundamental, and more fixable, which opens the door for regulators to drop a hammer on the market and drive the right kind of innovation.

Configuration Theatre

Let’s be clear about the “AI security vulnerabilities” exploding in the news for clicks: they are configuration issues wrapped in intentionally scary marketing language.

The GitHub MCP exploit I debunked? Classic example:

  1. Give your AI org-wide GitHub access (stupid)
  2. Mix public and private repos (normal)
  3. Let untrusted public content reach your agent (predictable)
  4. Disable security prompts (suicidal)

The “fix”?

Use GitHub’s fine-grained personal access tokens that have existed for years. Scope your permissions.
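To make “scope your permissions” concrete, here is a minimal sketch of a preflight policy check an agent harness could run before starting. The config fields (`org_wide`, `repos`, `confirm_prompts`) are hypothetical names for illustration, not GitHub’s actual token API:

```python
# Hypothetical preflight policy check for an AI agent's access config.
# Field names ("org_wide", "repos", "confirm_prompts") are illustrative.

ALLOWED_REPOS = {"myorg/public-site"}  # explicit allowlist, nothing else

def validate_agent_config(config: dict) -> list[str]:
    """Return a list of policy violations; an empty list means the config passes."""
    violations = []
    if config.get("org_wide"):
        violations.append("org-wide access is never allowed for agents")
    for repo in config.get("repos", []):
        if repo not in ALLOWED_REPOS:
            violations.append(f"repo not on allowlist: {repo}")
    if not config.get("confirm_prompts", True):
        violations.append("security prompts must stay enabled")
    return violations

# A config that hands the agent the keys to the kingdom fails loudly:
bad = {"org_wide": True, "repos": ["myorg/secrets"], "confirm_prompts": False}
good = {"org_wide": False, "repos": ["myorg/public-site"]}
```

The point is not this particular check; it is that the four mistakes in the list above are mechanically detectable before the agent ever runs.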

Dear Kings and Queens, don’t give the keys to your kingdom to a robotic fire-breathing dragon. Oh, the job of the court jester never gets old, unfortunately.

Ok, now let’s switch gears. Here’s where Simon gets it right: this isn’t really about any single vulnerability. It’s about the systemic impossibility of maintaining perfect security hygiene across dozens of AI tools in an ocean of unregulated code, each with its own configuration quirks, permission models, and interaction effects.

Security Theatre

The problem isn’t that we lack the tools to secure AI agents. No, no, we experts have plenty to sell you:

  • Fine-grained access tokens
  • Sandboxing and containerization
  • Network policies and firewalls
  • “AI guardrail” products (at premium prices, naturally)

The problem is another ancient and obvious one. We’re pushing liability onto users to make perfect security decisions about complex tool combinations, often without understanding the full attack surface. Even security professionals mess this up regularly, which creates a juicy market for attackers to exploit and for defenders to monetize. This favors wealthy elites and undermines society.

Don’t underestimate the sociopolitical situation that disruptive technology puts everyone in. When you install a new MCP server, which is fast becoming an expectation, you’re making implicit trust decisions:

  • What data can it access?
  • What external services can it call?
  • How does it interact with your other tools?
  • What happens when malicious content reaches it?

Misconfigure any of these and you’ve left the castle gates open: boom, you just proved Simon’s lethal trifecta.

TEE Time With Solid: Secure-by-Default as the Actual AI Default

This is where the combination of a Trusted Execution Environment (TEE) and Solid data wallets gets interesting. Instead of relying on perfect human configuration, we can deploy a security model that is architecturally easy to configure safely and hard to misconfigure (fail-safe). Imagine a process that refuses to run unless it has passed preflight checks, the same way a plane can’t take off without readiness assessments.

Cryptographic Access Control Slays the Configuration Dragon

Traditional approach: “Please remember to correctly scope all your tokens scattered everywhere, and never give your AI agent access to repos that might ever contain something sensitive.”

Impossible.

TEE + Solid approach: Your AI agent cryptographically cannot access data you haven’t explicitly granted it permission to use. The permissions are enforced at the hardware level, not the “please accept all fault even for things you can’t see or understand” level.
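As a toy illustration of access enforced by keys rather than promises, here is a sketch in which the agent can only read a resource if it holds a grant signed with a key the user controls. HMAC stands in for the TEE’s hardware-backed signing primitive, and all names are assumptions for illustration:

```python
# Sketch of cryptographically enforced access grants. HMAC with a sealed key
# stands in for a hardware-backed primitive; in a real TEE the key never
# leaves the enclave. All names here are illustrative.
import hashlib
import hmac
import json

TEE_KEY = b"hardware-sealed-key"  # never exported in a real enclave

def issue_grant(resource: str, action: str) -> dict:
    """User explicitly authorizes one resource/action pair."""
    payload = json.dumps({"resource": resource, "action": action}, sort_keys=True)
    sig = hmac.new(TEE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def agent_access(grant: dict, resource: str, action: str) -> bool:
    """The enclave releases data only if the grant verifies AND matches the request."""
    expected = hmac.new(TEE_KEY, grant["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, grant["sig"]):
        return False  # forged or tampered grant
    claims = json.loads(grant["payload"])
    return claims == {"resource": resource, "action": action}

grant = issue_grant("solid://me/contacts", "read")
```

A grant for your contacts simply does not work against your email: there is no setting to get wrong, only a signature that does or does not verify.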

Traditional approach: Rely on application developers to implement proper sandboxing, and hope they got it right or that someone has enough money to start a lawsuit that might help them years from now.

TEE + Solid approach: The AI runs in a hardware-isolated environment where even the operating system can’t access its memory space. Malicious instructions can’t break out of a boundary that doesn’t rely on software for enforcement.

Traditional approach: Trust each vendor to implement security correctly and make a wish on a star that somehow someday their interests will align at least a little with yours.

TEE + Solid approach: Your data stays under your cryptographic control. The AI agent proves what it’s doing through attestation mechanisms. No vendor can access your data without your explicit, cryptographically verified consent.
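The attestation idea can be reduced to a small sketch: the client only trusts an enclave whose reported code measurement matches a hash it has pinned in advance. Real attestation schemes (Intel SGX, AMD SEV, ARM TrustZone) also involve vendor-signed quotes and freshness nonces; this shows only the core comparison, with illustrative names:

```python
# Sketch of remote attestation: release secrets only to code whose
# measurement (hash) matches a known-good value pinned by the verifier.
# Real schemes add vendor-signed quotes and nonces; names are illustrative.
import hashlib

TRUSTED_MEASUREMENTS = {
    hashlib.sha256(b"agent-binary-v1.2.0").hexdigest(),  # audited release
}

def enclave_report(binary: bytes) -> str:
    """The TEE measures the code it actually loaded and reports the hash."""
    return hashlib.sha256(binary).hexdigest()

def verify_attestation(report: str) -> bool:
    """Trust is a set-membership check, not a configuration checkbox."""
    return report in TRUSTED_MEASUREMENTS

good_report = enclave_report(b"agent-binary-v1.2.0")
evil_report = enclave_report(b"agent-binary-with-backdoor")
```

A backdoored binary produces a different measurement, so the verifier never hands it the decryption keys in the first place.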

Simon’s trifecta after TEE

The difference is remarkable when we’re talking about TEE time.

TEEs aren’t perfect, since everything has an attack surface, yet they shift the security model from hoping users configure everything correctly to enforcing boundaries at the hardware layer.

Private Data Access: Instead of hoping developers scope permissions correctly, the TEE can only decrypt data you’ve explicitly authorized through your Solid wallet. Misconfiguration becomes cryptographically constrained.

Untrusted Content: Rather than trusting prompt filtering and “guardrails,” untrusted content gets processed in isolated contexts within the TEE. Even if malicious instructions succeed, they can’t access data from other contexts.

External Communication: Instead of hoping network policies are configured correctly, all outbound communication requires cryptographic signatures from your Solid identity. Data exfiltration requires explicit user consent that can’t be bypassed by clever prompts.
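The egress rule above can be sketched as a gateway that forwards only messages carrying a signature made with the user’s key. A prompt-injected agent cannot produce that signature, so exfiltration fails regardless of how clever the injected instructions are. HMAC stands in for a Solid identity’s signing key, and the function names are assumptions:

```python
# Sketch of an egress gateway: outbound messages require a signature only the
# user can produce. HMAC stands in for a Solid identity's signing key; the
# names here are illustrative, not a real Solid API.
import hashlib
import hmac
from typing import Optional

USER_KEY = b"users-solid-signing-key"  # held by the user, never by the agent

def user_approve(message: bytes) -> bytes:
    """The explicit consent step: only the user's key can sign."""
    return hmac.new(USER_KEY, message, hashlib.sha256).digest()

def egress_gateway(message: bytes, signature: Optional[bytes]) -> bool:
    """Forward only messages the user has cryptographically approved."""
    if signature is None:
        return False  # agent tried to send without consent
    expected = hmac.new(USER_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

msg = b"POST /share report.pdf"
```

Note that an approval is bound to one exact message: a signature for the report cannot be replayed to smuggle out something else.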

Ending the Hype Cycle

I’ve spent decades watching vendors turn misconfiguration checks into pay-to-play profit for an elite few (e.g. how IBM used to charge for preflight checks and operations support).

Speaking of extractive business models, welcome to your robot taxi that, en route, asks you for five more dollars, or else its configuration will let you crash into a tree and be burned alive.

The AI space is in danger of falling into an extractive, toxic, anti-competitive market playbook: identify a real problem, propose a proprietary monitoring layer to lock in as many customers as possible, charge enterprise prices, and build a moat for RICO-like rent-seeking.

But TEE + Solid represents something different: an open-standards-based architectural shift that eliminates entire classes of vulnerabilities, rather than turning them into profit centers for vendors selling mitigations that are only marginally harder to exploit.

This isn’t about buying better guardrails or more sophisticated monitoring. It’s about building systems where:

  1. Security properties are enforced by physics (hardware isolation) and mathematics (cryptography)
  2. Users maintain sovereign control over their data and AI interactions
  3. Default configurations are secure configurations because insecure configurations are impossible

It’s no more difficult than saying TLS (in transit) and AES (at rest) are needed for safe information management. Turn it on: one, two, three.
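“Turn it on” is literal for the transit half: a hardened TLS client context in Python’s standard library is two lines. (Encryption at rest needs an AES implementation such as the third-party `cryptography` package, since the stdlib ships none, so only the TLS side is shown here.)

```python
# A TLS client context with secure defaults, using only the stdlib.
import ssl

ctx = ssl.create_default_context()            # certificate + hostname checks on by default
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
```

Pass `ctx` to any stdlib networking call that accepts an `ssl_context` and the secure behavior is the default, not a configuration achievement.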

Practical Path Forward is Practical

We have the building blocks for TEE + Solid AI agents today. Why not try one? The hardware is already at scale, the protocols are standardized, and the user experience is all that needs serious work.

The foundation has been laid since around 2015 (MapReduce shout out), which means AMD, Intel, and ARM have all been pushing TEE capabilities into mainstream processors for almost a decade. The Solid protocol is the Web’s logical evolution for storage. Open-source AI models are good enough to run safely and locally in controlled environments.

When we put these pieces together, we won’t need Hail Mary plays anymore. Expecting a perfect play of security hygiene from users, or perfect implementation from vendors, is a fantasy fit for Disney profits, not a daily-life strategy. The security needs to be built into the architecture itself.

So scope your damn cryptographic tokens like always, read the security prompts attached to reasonable boundaries, and maybe don’t give org-wide access to experimental AI tools. But let’s also start thinking about how we build systems that don’t require belief in a superhuman.

Demanding super-vigilance from every user should be recognized as a form of supervillainy: an anti-pattern for building trusted systems.

My bourbon should be for celebrating solutions, not crying over fixing problems that never should have existed in the first place.

Why the Swiss Buried a Report Revealing EV Should Replace 90% of ICE

Swiss officials buried a taxpayer-funded study showing how ordinary citizens could save money and help the climate—not because the science was wrong, but because they’d been politically manipulated into fear about telling the truth.

The Smoking Diesel: A Study That Should Have Helped Everyone

Swiss taxpayers funded important research in 2022. An allocation of 118,000 francs went to the Federal Office of Energy to answer a practical question: when does switching to an electric vehicle (EV) save both money and emissions?

The answer would help families make one of their biggest financial decisions—buying a car—with complete information. Multiple peer-reviewed studies internationally had already established that electric vehicles save money over their lifetime for most drivers while significantly reducing emissions.

The Swiss study clearly confirmed the facts:

More than 90% of current gas and diesel car owners would reduce both costs and emissions by switching to an electric vehicle immediately—unless they barely drive at all.

Experts from the Swiss Touring Club, Paul Scherrer Institute, and other respected organizations validated the research. Mobility expert Romain Sacchi called the work “excellent” with “clear conclusions.”

This is exactly the kind of practical consumer guidance government should provide.

Instead… Swiss officials ran and hid, trying to bury the facts that would save lives and reduce costs.

Smoke Signals of Information Warfare

Internal emails obtained by journalists through freedom of information laws reveal what happened:

Officials had been fed a dangerously toxic narrative: that providing consumer guidance (doing their job) would harm the very people it’s meant to help. It’s like gaslighting a surgeon into believing that a life-saving operation would kill a patient who will die without it. Or like telling the police not to file the crime report needed to help a victim because the victim might not like reading it.

When the study neared completion in December 2024, the project manager suddenly showed distress that the topic had been deemed “potentially sensitive.” The communications chief went further, expressing fears the work would be judged “too academic” (i.e., lacking a profit motive).

The truth: Helping consumers understand the lifetime costs of major purchases is basic public service.

The manipulation: Bad actors had convinced officials that providing this service was somehow condescending to citizens.

The reality: Withholding cost-saving information from taxpayers who funded it is what actually disrespects citizens.

When Public Servants Are Terrorized to Stop Serving the Public

The emails reveal a cascading panic among officials who should have been proud to share helpful research:

What actually helps families: Clear information about transportation costs and environmental impact.

What officials feared: Being accused by “certain media” of political manipulation.

What really happened: Officials were manipulated by far-right extremists into hiding helpful information.

When journalists requested the study under freedom of information laws, the scramble intensified. Officials explored various deceptions:

  • Falsely claiming the completed study wasn’t finished
  • Retroactively renaming the “final report” as “interim”
  • Inventing new requirements to justify suppression

Only after exhausting these options did they inform the minister’s office. Leadership ultimately released the study to journalists but refused to publish it officially.

Real Harm to Swiss Democracy

Democratic governments exist to serve citizens with accurate information. When Switzerland’s transport sector produces the most greenhouse gases and remains furthest from climate targets, citizens deserve facts about their options.

Instead, a disinformation campaign achieved its goal:

  • Thousands of Swiss families made car-buying decisions without complete information
  • Switzerland missed its 2025 goal of 50% electric vehicle adoption (achieving only 30%)
  • Public servants learned that doing their jobs invites punishment

The truth about who this serves: Withholding consumer information benefits industries that profit from uninformed decisions. Disabling experts and public servants serves fascism.

The false narrative: That sharing price comparisons somehow harms consumers.

The reality: Consumers are harmed when they lack information needed for major financial decisions.

Understanding the Manipulation Playbook

The following patterns seem to be consistent across topics and countries.

Healthcare

Reality: Vaccines prevent disease and save lives, as shown by centuries of evidence.

The disinformation: Bad actors falsely label this scientific consensus as “Big Pharma propaganda.”

The result: Public health officials become afraid to share life-saving information.

Education

Reality: Comprehensive education improves life outcomes for all students.

The disinformation: Bad actors call evidence-based apolitical curricula the opposite, such as “indoctrination.”

The result: Educators fear educating despite well established facts.

Economics

Reality: Progressive taxation and social programs reduce inequality and improve social mobility.

The disinformation: Bad actors dismiss economic research with practical proofs as “socialist theorizing.”

The result: Policymakers fear implementing policies that demonstrably help most citizens.

Swiss Democracy in Retreat

Switzerland built its prosperity on pragmatic, evidence-based decision-making. The country’s direct democracy depends on informed citizens making collective choices.

What democracy requires: Citizens with access to relevant information making decisions based on facts.

What actually happened: Officials hid facts because they’d absorbed narratives designed to prevent evidence-based policy.

The outcome: Swiss voters deciding on energy and climate policies without access to relevant research their taxes funded.

The bitter irony: Officials thought they were avoiding controversy. Instead, they created a democratic crisis where public servants fear serving the public.

Respect vs. Saccharin

Actually respecting citizens:

  • Trusting them with factual information about major purchases
  • Sharing research they funded about topics affecting their lives
  • Believing people can understand cost comparisons and make informed choices
  • Providing consumer guidance as a basic public service

Actually toxic to citizenship:

  • Deciding they can’t handle straightforward information
  • Hiding research because you assume they’ll misunderstand
  • Withholding data that could save them money
  • Treating taxpayers like children who need protection from facts

Protecting Democratic Information Sharing

Breaking this cycle requires recognizing how disinformation campaigns manipulate public servants into betraying public service:

Officials: Your duty is providing citizens with accurate information. When you hide helpful research, you’re not avoiding elitism—you’re practicing it.

Leaders: Defend civil servants who share evidence-based information. Make clear that taxpayer-funded research belongs to taxpayers.

Media: Report on information suppression as democratic failure. Don’t amplify narratives designed to prevent evidence-based policy.

Citizens: Demand access to research you funded. Support officials who prioritize transparency. Recognize when “controversy” is manufactured to hide helpful information.

Threats to Every Democracy

Switzerland’s buried electric vehicle study forces us to confront fundamental questions about information warfare in our midst.

What is government for?

If not to help citizens make informed decisions with accurate information, then what?

Who benefits from ignorance?

When public servants fear sharing consumer guidance, who profits from uninformed purchasing decisions?

What does democracy mean?

Can it function when officials hide information because they’ve internalized anti-democratic narratives?

What Swiss Officials Tried to Hide, to Appease Anti-Government Elites

  • Electric vehicles have lower lifetime costs than gas cars for most drivers
  • The climate benefit of switching is immediate and substantial
  • These findings align with international research and basic physics
  • This information could help families save thousands of francs

This is helpful consumer information. There’s nothing controversial about helping people save money while supporting energy independence.

The controversy was manufactured by those who benefit when consumers lack information. Swiss officials fell for it, choosing institutional comfort over public service.

The real scandal

Public servants so paralyzed by false narratives that they forget their job is serving the public with facts.

The Swiss case isn’t unique. Across democracies, public servants increasingly fear that doing their jobs—providing helpful information to citizens—will bring punishment.

This fear doesn’t arise naturally. It’s cultivated by campaigns designed to prevent evidence-based policy by making evidence itself seem dangerous.

The simple antidote

Recognize that democratic governments exist to help citizens make informed decisions. Any narrative suggesting that informed citizens are bad for democracy is itself anti-democratic.

When officials hide helpful information, they don’t avoid elitism—they embody it. When they share facts that help people save money and protect their children’s future, they practice democratic respect.

The choice is clear. The question is whether democratic institutions will remember their purpose and remove toxic right-wing extremism the way we removed lead from gasoline, or better yet, upgrade from petroleum altogether.


An investigation by Republik and the WAV research collective first uncovered how false narratives about “elitism” led Swiss officials to hide helpful consumer information. Thanks to freedom of information laws, the study “Purchase Decision: When It Pays to Switch to an Electric Car” is now publicly available.

The case demonstrates why transparency laws and investigative journalism remain essential for democratic accountability—especially when public servants forget they serve the public.