Failed White Ethnostate Was the Blueprint for Twitter Takeover

There’s a predictable path from Tesla’s killing-machines to Twitter’s destruction, one I warned about in 2016. That’s why I would say there’s crucial historical context missing from this late-to-the-party Atlantic article about Twitter’s transformation into an authoritarian platform. Here’s their seemingly provocative headline:

Musk’s Twitter Is the Blueprint for a MAGA Government: Fire everyone. Turn it into a personal political weapon. Let chaos reign.

Except, these warning signs were visible long before Twitter’s acquisition. In 2016 I presented a BSidesLV Keynote called “Great Disasters of Machine Learning,” analyzing how automated systems become tools of authoritarian control. The patterns were already clear in Tesla’s operations, showing striking parallels to historical examples of technological authoritarianism.

The Lesson of Rhodesia

Consider the history of Rhodesia, a self-governing British colony that became an unrecognized state in southern Africa (now Zimbabwe) and whose legacy has secretly been driving a lot of online trolling today. Rhodesia's abrupt collapse stemmed from elitist minority rule systematically disenfranchising a majority population based on their race. When Ian Smith’s government unilaterally declared independence in 1965, it was presented as a “necessary” administrative action to maintain white “order” and white “efficiency” to prevent societal decay.

Sound familiar? As the Atlantic notes:

Musk’s argument for gutting Twitter was that the company was so overstaffed that it was running out of money and had only “four months to live.” Musk cut so close to the bone that there were genuine concerns among employees I spoke with at the time that the site might crash during big news events, or fall into a state of disrepair.

“Authorimation” Pattern Called Out in 2016

Great Disasters of Machine Learning: Predicting Titanic Events in Our Oceans of Math

My keynote presentation at the Las Vegas security conference highlighted three key warning signs that predicted this slide towards tech authoritarianism:

  1. Hiding and Rebranding Failures: Tesla’s nine-day delay in reporting a fatal autopilot crash—while vehicle parts were still being recovered weeks later—demonstrated how authoritarian systems conceal their failures. As the Atlantic observes about Twitter/X:

    Small-scale disruptions aside, the site has mostly functioned during elections, World Cups, Super Bowls, and world-historic news events. But Musk’s cuts have not spared the platform from deep financial hardship.

  2. Automated Unaccountability: I coined the term “authorimation” – authority through automation – to describe how tech platforms avoid accountability while maintaining control. The Atlantic notes this pattern continuing:

    Their silence on Musk’s clear bias coupled with their admiration for his activism suggest that what they really value is the way that Musk was able to seize a popular communication platform and turn it into something that they can control and wield against their political enemies.

  3. Technology as a Mask for Political Control: Just as Rhodesia’s government used administrative language to mask apartheid, today’s tech authoritarians use technical jargon to obscure power grabs. The Atlantic highlights this in Ramaswamy’s proposal:

    Ramaswamy was talking with Ezra Klein about the potential for tens of thousands of government workers to lose their job should Donald Trump be reelected. This would be a healthy development, he argued.

The “Killing Machine” Warning

My 2016 “killing machine” warning wasn’t just about Tesla’s vehicle safety—it revealed how automated systems amplify power imbalances while operators deny responsibility. Back then, discussing Tesla’s risks made people deeply uncomfortable, even as Musk himself repeatedly boasted “people will die” as a badge of honor.

Claims of “90% accuracy” in ML systems masked devastating failures, just as today’s “necessary” cuts conceal the systematic dismantling of democratic institutions. Musk reframed these failures as stepping stones toward his deceptively branded “Mars Technocracy” or “Occupy Mars”—a white nationalist state in technological disguise.

As the Atlantic concludes:

Trump, however, has made no effort to disguise the vindictive goals of his next administration and how he plans, in the words of the New York Times columnist Jamelle Bouie, to “merge the office of the presidency with himself” and “rebuild it as an instrument of his will, wielded for his friends and against his enemies.”

Rhodesia’s fifteen-year “bush war” wasn’t a business failure any more than Twitter’s transformation is about efficiency. Labeling either as a mere administrative or business challenge obscures the truth: these are calculated attempts to exploit unregulated technology, creating bureaucratic loopholes that enable authoritarian control while denying human costs.

Trust and Digital Ethics

Dismissing Twitter as a business failure echoes attempts to frame IKEA’s slave labor as simply an aggressive low-cost furniture strategy.

While it’s encouraging to see digital ethics finally entering mainstream discourse, some of us flagged these dangers when Musk first eyed Twitter, well after his “driverless” fraud claimed its first lives in 2016 yet was cruelly allowed to continue the killing.

More Teslas mean more tragic deaths, unlike any other car brand. Without fraud, there would be no Tesla. Source: Tesladeaths.com

Now, finally, others are recognizing the national security threats lurking within “unicorn” technology companies funded by foreign adversaries (e.g. why I deleted my Facebook account in 2009). A stark warning about “big data” safety that I presented as “The Fourth V” at BSidesLV in 2012 has come true in the worst ways.

2024 U.S. Presidential election headlines indicate that major integrity breaches on online platforms have been facilitating a rise in dangerous extremism

What have I presented more recently? I just met with a war history professor to discuss why Tesla’s CEO accepts billions from Russia while amassing thousands of VBIED drones near Berlin. Perhaps academia will finally formalize the public safety warnings that some of us deep within the industry have raised for at least a decade.

WI Tesla Kills Five in “Veered” Crash

They apparently drove into a tree and couldn’t open the doors, a hallmark of the death trap design called Tesla.

Dane County officials released new information Monday about the five individuals killed Friday in a Town of Verona crash.

The Dane County Sheriff’s Office said the 2016 Model S Tesla included five passengers. They were described as:

55-year-old Belleville man
55-year-old Crandon woman
54-year-old Crandon man
48-year-old Brooklyn Park, Minn., woman
48-year-old Brooklyn Park, Minn., man

A known defect in the driverless software is suspected of veering the car suddenly off the road, and a known defect in the door hardware is suspected of trapping the occupants inside as they burned alive.

The CEO of Tesla, who illegally immigrated to America using family wealth from South African apartheid, has ignored these deadly defects for years. Instead he has been focused entirely on overthrowing the American government to remove all public safety regulations.

Minnesotans should have known better than to go anywhere near a car made and operated remotely by this man.

Tesla Owner Confirms Critical Vulnerability in “Smart Summon”

The latest firmware update forced on Tesla owners, with no option to roll back, has a critical denial-of-service vulnerability. Here is my favorite part of this sad story:

Upon receiving Tesla firmware version v12.5.4.1, something broke and his car will no longer move.

This version of the software did make changes to both Actually Smart Summon and the old “Dumb” summon.

[…]

Now, he walks places instead of driving, though that won’t remain an option as the Nova Scotian winter sets in.

He walks now instead of driving.

Biden’s AI Security Memo Needs a More Solid Technical Foundation

The National Security Memorandum (NSM) on AI from the Biden administration caught my attention – but not for the reasons you might think. As I wrote with two co-authors in a recent Fordham Law Review paper on federalizing data privacy infrastructure, AI governance requires a comprehensive national security strategy. While many of my peers seem focused on legal and compliance implications, I see a more crucial technical gap that needs addressing: data architecture and sovereignty.

The Security Elephant in the Room

Let’s be frank – we’re building AI systems on shaky ground. The NSM talks about “safe, secure, and trustworthy AI,” but as any security professional knows, you want to avoid bolting security onto a system late in development. We need to build it into the foundation, and that’s where I believe the W3C Solid standard for data wallets plays a transformative role.

Currently, our AI systems are like fortified castles built on quicksand. We’re focusing on securing algorithms and models while leaving fundamental questions about underlying data ownership, access, and control largely unaddressed. Have you tried to safely move your Claude project artifacts into ChatGPT, for example, and reliably detect any loss of integrity or confidentiality? While the NSM’s designation of the AI Safety Institute as the primary industry contact is promising, without a standardized data architecture we’re setting ourselves up for a security nightmare.
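As a small illustration of how improvised this currently is, here is a minimal sketch in TypeScript (Node.js built-ins only, with hypothetical file paths) of the kind of ad hoc integrity check we are left to perform when an artifact moves between AI platforms, since nothing in the platforms themselves attests to the data:

```typescript
// Minimal sketch: an improvised integrity check on an AI artifact moved
// between platforms. File paths are hypothetical placeholders.
import { createHash } from "crypto";
import { readFileSync } from "fs";

// Compute a SHA-256 digest of a file's contents.
function digestOf(path: string): string {
  return createHash("sha256").update(readFileSync(path)).digest("hex");
}

// Hash the artifact as exported from one system and as re-imported elsewhere.
const exported = digestOf("./claude-project-export.json");  // hypothetical path
const imported = digestOf("./chatgpt-project-import.json"); // hypothetical path

if (exported !== imported) {
  console.error("Integrity check failed: artifact changed in transit.");
} else {
  console.log("Digests match:", exported);
}
```

Even this says nothing about confidentiality or about who touched the data along the way, which is exactly the gap a standardized architecture should close.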

Why Solid Matters for AI Security

For those unfamiliar, Solid is a set of protocols and standards developed under the W3C since 2016 that enables true data owner sovereignty with greater transparency in processing. Think of it as the difference between having hundreds of different keys for hundreds of different locks versus having a single, secure master key system that logs every use. The hundreds-of-keys mess is what we’re dealing with in AI security right now: proprietary systems that don’t talk to each other.
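To make the analogy concrete, here is a minimal sketch, assuming Inrupt’s open-source @inrupt/solid-client-authn-node library and hypothetical issuer, credential, and resource values, of how a single WebID-backed session authenticates once and then makes standards-based requests that the Pod server can log, instead of juggling a different credential per vendor:

```typescript
// Minimal sketch: one WebID-backed session replaces per-vendor credentials.
// Issuer URL, client credentials, and resource URL are hypothetical.
import { Session } from "@inrupt/solid-client-authn-node";

async function main() {
  const session = new Session();

  // Authenticate once against the Pod provider (OpenID Connect under the hood).
  await session.login({
    oidcIssuer: "https://login.example-pod-provider.com", // hypothetical issuer
    clientId: process.env.SOLID_CLIENT_ID!,
    clientSecret: process.env.SOLID_CLIENT_SECRET!,
  });

  // Every request now carries the same identity, which the Pod server can log.
  const response = await session.fetch(
    "https://alice.example-pod-provider.com/ai-training/customer-records.ttl" // hypothetical resource
  );
  console.log("Status:", response.status, "WebID:", session.info.webId);
}

main().catch(console.error);
```

Treat this as a sketch of the pattern, not a drop-in deployment; the point is that identity and access flow through one open standard rather than each platform’s proprietary scheme.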

Let me break this down with a real-world scenario. Imagine you’re trying to secure an AI system that processes customer data across multiple cloud providers. Currently, you’re juggling different authentication systems, piecing together audit trails, and hoping your access controls are properly configured across all systems. It’s a nightmare that keeps many of us up at night, given how few, if any, security vendors are ready to offer real AI breach solutions.

With Solid’s standardized approach, this all changes. Instead of proprietary authentication systems, you get a unified standard for data ownership that works everywhere – like bringing OAuth-level standardization to AI data access. Your audit trails become comprehensive and automated, not pieced together from different systems. And perhaps most importantly, data stays compartmentalized with granular permissions, so a breach in one area doesn’t compromise everything. Solid also enables safer, more effective AI because ownership naturally strengthens data integrity, with far less risk of privacy loss.
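As one hedged sketch of what granular permissions can look like in practice, the snippet below uses the universalAccess API from the open-source @inrupt/solid-client library (the resource URL and WebID are hypothetical) to grant an AI service read-only access to a single resource, so the Pod, not the AI vendor, stays the system of record for who can see what:

```typescript
// Sketch: grant an AI service read-only access to one resource in a Pod.
// Resource URL and WebID are hypothetical; the session comes from the
// authentication sketch above.
import { universalAccess } from "@inrupt/solid-client";
import { Session } from "@inrupt/solid-client-authn-node";

async function grantReadOnly(session: Session): Promise<void> {
  const resource =
    "https://alice.example-pod-provider.com/ai-training/customer-records.ttl"; // hypothetical
  const aiServiceWebId = "https://ai-vendor.example.com/service#agent"; // hypothetical

  // Grant read, and explicitly deny write, for this one agent on this one resource.
  const access = await universalAccess.setAgentAccess(
    resource,
    aiServiceWebId,
    { read: true, write: false },
    { fetch: session.fetch }
  );
  console.log("Effective access for AI service:", access);
}
```

Revoking access later is the same call with read set to false, and nothing about the AI vendor’s own infrastructure has to change.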

What the NSM Gets Right (And Where It Falls Short)

Reading through the NSM, I found myself nodding along with its emphasis on “mechanisms for risk management, evaluations, accountability, and transparency.” These are exactly the principles we need. The document shows a solid understanding of supply chain security for AI chips and makes competitor intelligence collection a priority – both crucial for our national security posture.

But here’s where it falls short: it’s missing the architectural foundation. While it talks about securing AI systems, it doesn’t address the fundamental need for a standardized data architecture. It’s like trying to secure a city without agreeing on how to build the roads, walls and gates. We need more than just guidelines – we need a common framework for how data moves and who controls it.

A Strategic Roadmap for Security Leaders

If you’re a CISO reading this, you’re probably wondering how to actually implement these ideas. I’ve been working with security teams on this transition, and here’s what the most effective approach looks like: Start with a pilot project in a controlled environment – perhaps your internal AI development platform. Use this to demonstrate how standardized data wallets can simplify access control while improving security posture.

Over the next six months, focus on building out the infrastructure for standardized authentication and data governance. This isn’t just about technology – it’s about establishing new processes that align with how AI actually uses data. You’ll find that many of your current security headaches around data access and audit trails simply disappear when you have a proper foundation.
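One concrete piece of that governance work, sketched below with the same assumed @inrupt/solid-client library and a hypothetical resource URL, is a periodic access review that reads entitlements from one standard interface instead of from each vendor’s proprietary console:

```typescript
// Sketch: a periodic governance check listing which agents can reach a
// resource, so access reviews read from one standard interface.
// Resource URL is hypothetical; the session comes from the earlier sketch.
import { universalAccess } from "@inrupt/solid-client";
import { Session } from "@inrupt/solid-client-authn-node";

async function reviewAccess(session: Session): Promise<void> {
  const resource =
    "https://alice.example-pod-provider.com/ai-training/customer-records.ttl"; // hypothetical

  // Ask the Pod which agents currently hold which access modes.
  const perAgent = await universalAccess.getAgentAccessAll(resource, {
    fetch: session.fetch,
  });

  if (!perAgent) {
    console.warn("Access information unavailable for", resource);
    return;
  }
  for (const [webId, access] of Object.entries(perAgent)) {
    console.log(webId, "->", access); // e.g. { read: true, write: false, ... }
  }
}
```

This is only a sketch under those assumptions, but it shows why a standardized data foundation makes the audit-trail headaches described above start to disappear.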

The long-term vision should be a complete transition to wallet-based architecture for AI systems. Yes, it’s ambitious, but it’s also necessary. The CISOs I’ve talked to who consider this path find that it significantly reduces their attack surface while making compliance much more straightforward.

The Path Forward

The NSM is a step in the right direction, but as security leaders, we need to push for more concrete technical standards. Solid provides a ready-made framework that could address many of the security and privacy concerns the NSM raises.

My recommendation? Start experimenting with Solid now as a technical solution that brings huge efficiencies. Don’t wait for more regulations and costly cleanup of technical debt. The organizations that build their AI systems on a Solid foundation of data sovereignty will be better positioned to meet present and future security and compliance requirements.

Bottom line: AI security isn’t just about protecting models and algorithms – it’s about ensuring the entire data lifecycle is secure, traceable, and under proper control. The NSM gives us the “should do”; Solid gives us the “how to”.