In 2023 Ai Weiwei Called Out Elon Musk as a Nazi

In July 2023 an artist famous for political commentary dropped his work on social media.

Source: Twitter

Ai Weiwei’s artwork hit different from most – this was someone who had already been jailed and banned from Twitter for speaking truth to power.

Yet when he called out clear Nazi connections, there was no denial, just quiet suppression (Elon Musk censored Ai Weiwei’s animated X by deleting it). The silence spoke volumes.

He’s particularly scathing about Elon Musk, who received multiple favours from the CCP to set up his Tesla factory in Shanghai and sings the praises of the Chinese government. Musk owns X, the platform that used to be Twitter, and Ai has on his phone an animation he created, the X spinning and turning into a swastika. It was deleted from X but was still available on Instagram. ‘It’s so creepy. I mean it looks so ugly,’ he said.

This artistic rendering of the X brand was deleted by self-promoting “free speech extremist” Elon Musk. Source: Ai Weiwei

Fast forward to 2025 and the pattern is painfully clear. While some still debate whether to call a spade a spade, Musk has moved from dog whistles to bullhorns, now openly making Hitler salutes at political rallies that spark “we’re back” celebrations across social media.

“Maybe woke really is dead,” white nationalist Keith Woods posted on X.

“Did Elon Musk just Heil Hitler …” right-wing commentator Evan Kilgore posted on X. “We are so back.”

Today’s Nazi groups aren’t hiding anymore – they’re celebrating how their messages have gone mainstream, just as Ai Weiwei warned us through his art years ago. The path from his Twitter critique to American political rallies is as straight as it is terrifying.

And here’s where history rhymes with a vengeance. Our language “experts” stand in rising floodwaters, watching the dam crack, telling us to wait for “concrete evidence” of danger. By the time they admit the obvious, the flood will have already swept us all away.

“I’m skeptical it was on purpose,” said Jared Holt, a senior research analyst at the Institute for Strategic Dialogue, which tracks online hate. “It would be an act of self-sabotage that wouldn’t really make much sense at all.”

Self-sabotage doesn’t make sense? My dear Jared, it’s the very definition of Nazism. Do you not know history?

To understand Nazis is to understand self-destruction because it’s their entire endgame. Every time. The proof is found in history, as artists painting on Musk’s own factory have depicted so simply:

Elon Musk has been a frequent promoter of the AfD (Nazi) party in Germany, sparking protests like this graffiti outside the Tesla factory.

This isn’t about academic caution – it’s about the deadly paralysis of overthinking while fascists build real power. They don’t need your perfect analysis. They just need your hesitation.

The march of fascism through Europe, leaving millions dead in its wake before World War II finally stopped it.

Remember: While scholars polish their dissertations on “the nature of rising authoritarianism,” extremists are seizing actual power. They don’t play by academic rules. They never have.

We’ve seen this playbook before. We know how it ends. When people say “we’re not Germany in the ’30s,” they’re describing something potentially even more dangerous today – classic racist “show me your papers” tactics mixed with modern technology, which we need to call out as immoral and illegal. Millions of explosive racist killer robots descending on cities is not out of the question right now.

Teslas are known for their unexplained sudden “veered” crashes into people and infrastructure, causing widespread suffering from intense chemical fires.

After all, the Nazis themselves studied and borrowed from American systemic and industrialized racism to build the European genocide machine. History doesn’t repeat, but it echoes – and right now, those echoes are extremely clear.

The only question is who among us will act on the warning signs this time.

Ai Weiwei was right.

Related: I coincidentally wrote on this blog, two days after Ai Weiwei’s tweet, about how the Twitter rebrand put a swastika on top of a Tesla brand already steeped in Nazi messaging.

Silicon Valley VC Mourns Staff Killed by Tesla Robot

The operator of a Tesla robot on public streets told police he kept giving it the order to stop, but the known defective robot design instead launched itself like a SpaceX rocket into urban traffic, instantly killing at least one Silicon Valley star and his dog.

According to the DA, Zhang told investigators he tried stopping, but the brakes in his Tesla did not respond. Verifying that claim will take time.

“That requires a significant vehicle inspection, generally done by the manufacturer of the vehicle,” Jenkins said. “It requires accident reconstruction. We need to figure out how fast he was going and a download of the black box that was in the vehicle.”

Romanenko worked at venture capital firm Kleiner Perkins, where on Wednesday a colleague said they all called him Misha and shared the following statement: “Misha was a valued team member for his talent, dedication and collaborative spirit. He was not only a talented engineer, but also a wonderful person who will be greatly missed.”

No official statement is expected about the implicated deadly design defect, because Tesla shut down its public relations communications to remove liability for things said; public safety data related to this “unimaginable loss” of life will be treated as proprietary secrets.

A runaway Tesla robot killed Mikhael Romanenko, 27, in San Francisco last Sunday. Source: SF Chronicle

Instead, random robots on Twitter loosely affiliated with Tesla allegedly have begun the usual astroturfed anti-regulation political campaigns. They push the notion that an American car company should bear no responsibility for known design defects that cause a robot to crash dangerously in the same manner as prior ones.

In fact, just six hours earlier on the same day in SF there was a very similar design-defect crash – crucial foreshadowing.

What is the most likely explanation for these two very similar Tesla crashes in SF within hours of each other (let alone another two weeks earlier in a similar downhill highway-exit scenario)?

A Tesla in SF crashed unexpectedly when braking into Golden Gate Park. December 31st, 2024. Source: SF Chronicle
A Tesla in SF crashed unexpectedly when braking into a parking spot. January 19, 2025. Source: Mission Local

Tesla’s “driverless” system has a critical design failure.

While the first crash on Sunday was a low-speed maneuver into a building by its owner that totaled only their Tesla, the latter exploded into a mass casualty attack killing Mikhael Romanenko.

Consider how a vehicle-borne threat to public safety descended the Interstate 280 off-ramp at high speed onto 6th Street and ran through a red light. Perhaps the robot disengaged abruptly without warning and woke its sleeping owner, hurtling him half-asleep into city streets? We don’t know these details yet, but we can say any last-second attempts to stop a high-speed malfunctioning robot ran into exactly the same worst-case fail-unsafe design seen before, month after month… now hour after hour.

By comparison, China last year apparently had enough of Tesla sock puppets and safety-loophole games causing predictable tragedy. The government openly and loudly, with public safety regulations front of mind (let alone national security), banned the Tesla design defect that very likely caused this new tragedy in San Francisco.

If you haven’t been living under a rock, you must be aware of the many Tesla crashes allegedly caused by a sudden unintended acceleration (SUA) issue. Tesla owners reported that the car suddenly accelerated, and no amount of pressure on the brake pedal could make the car stop. This developed into a full-blown hysteria in 2020-2022 when people in China started protesting over Tesla’s alleged “brake problems.”

[…]

After a few crashes in China, local regulators forced Tesla to change the one-pedal driving logic. In May 2023, Tesla issued an important update in China… regulators are now taking another step and will formally legislate… to have one-pedal driving forbidden in China by 2026.

I’ll say it again. China two years ago forced Tesla to issue a critical safety update. Doesn’t it seem directly related?
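To see why one-pedal logic sits at the center of this, here is a minimal sketch of pedal arbitration (my own hypothetical Python illustration, not Tesla’s actual firmware): in a fail-safe design, any brake input overrides all propulsion torque, so a stuck, faulty, or misread accelerator can never out-vote the driver’s attempt to stop.

```python
from dataclasses import dataclass

@dataclass
class PedalInputs:
    accelerator: float  # 0.0 (released) to 1.0 (floored)
    brake: float        # 0.0 (released) to 1.0 (floored)

def torque_request(inputs: PedalInputs) -> float:
    """Hypothetical fail-safe pedal arbitration.

    Any detectable brake input cuts all propulsion torque, so a
    stuck, faulty, or misread accelerator can never out-vote the
    driver's attempt to stop -- the opposite of a design where
    'one-pedal' regen logic keeps commanding torque.
    """
    BRAKE_THRESHOLD = 0.02  # small dead-band for sensor noise

    if inputs.brake > BRAKE_THRESHOLD:
        # Fail-safe: zero propulsion, hand authority to the
        # friction brakes regardless of accelerator state.
        return 0.0

    # Normal driving: torque proportional to accelerator position.
    return inputs.accelerator

# Both pedals pressed: a fail-safe design requests zero torque.
print(torque_request(PedalInputs(accelerator=1.0, brake=0.5)))  # 0.0
```

A design that instead lets its one-pedal logic keep commanding torque while the driver stands on the brake is exactly the fail-unsafe behavior the Chinese regulators described above set out to ban.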

China’s move toward a baseline of quality in technology highlights how predictable these harms are, and how vulnerable Silicon Valley has become. The trope of China getting ahead is in fact a repeat of Japan raising quality and safety in the 1970s as real innovation. American car companies chose to fall behind, digging themselves deeper into anti-regulation quagmires of deadly defects.

Silicon Valley’s resistance to regulation thus mirrors Detroit’s stance in the 1970s, right before the personal consequences hit home. We all know the Ford Pinto fires and the Lee Iacocca seat-belt and airbag stories, right? The lesson that easily preventable and predictable deaths should be handled by regulators, not left to journalist-driven national outrage and Presidential action?

The death of one of their own has Silicon Valley VCs confronting an uncomfortable reality: their crusade for deregulated “disruption” finally disrupted something more permanent than a market – it ended the life of a colleague. The same venture capital firms that championed Tesla’s ‘move fast and break things’ approach must now reckon with exactly what – and who – gets broken when building robots unsafe at any speed.

China (as well as Israel and elsewhere) has recognized such vehicles as potential weapons systems (let alone surveillance) requiring strict regulation, not just consumer products.

Reports say the Tesla robot with the latest unregulated AI capabilities reached nearly 100 mph in two short blocks, ignoring all stop signals before slamming into stopped traffic – the logical result of a fail-faster culture with zero regard for human lives, deploying military-grade loitering munitions into dense civilian centers.

Tesla high-speed robots carrying high-explosive chemical cluster munitions have been stockpiled near capital cities around the world. Source: Berlin, Germany. Sean Gallup (Getty Images)

UK Tesla Crashes Into Pole

It’s really quite odd how often a Tesla will crash into a pole. Right after I wrote up the story about nearly 3,000 people losing power in American sub-zero weather, this UK story popped up.

A driver has been taken to hospital after a collision in Worthing. Photos taken on Terringes Avenue show a badly damaged Tesla, which collided with a telegraph pole. Sussex Police said the ‘car v telegraph’ road traffic collision was reported about 11.35am.

Source: Sussex World

Telegraph? What century is this?

At least they didn’t say carriage.

Trump Repeals AI Innovation Rules, Declares No Limits for Big Tech to Hurt Americans

The Great AI Safety Rollback:
When History Rhymes with Catastrophe

The immediate and short-sighted repeal of AI oversight regulations threatens America with a return to some of the most costly historical mistakes: prioritizing quick profits over sustainable innovation.

As with the introduction of leaded gasoline in the 1920s, we’re watching in real time as industry leaders push for unsafe deregulation that normalizes reckless behavior under the banner of innovation. What happens when AI systems analyzing sensitive data are no longer required to log their activities? When ‘proprietary algorithms’ become a shield for manipulation? When the same companies selling AI tools are also controlling critical infrastructure?

The leaded gasoline parallel is stark because industry leaders actively suppressed research showing devastating health impacts for decades, all while claiming regulations would ‘stifle innovation.’ Now we face potentially graver risks with AI systems that could be deployed to influence everything from financial markets to allegedly rigged voting systems, with even less transparency. Are we prepared to detect large-scale coordination between supposedly independent AI systems? Can we afford to wait decades to discover what damage was done while oversight was dismantled?

Deregulation Kills Innovation

Want proof? Look no further than SpaceX – the poster child of deregulated “innovation.” In 2016, Elon Musk promised Mars colonies by 2022. In 2017, he promised Moon tourism by 2018. In 2019, he promised robotaxis by 2020. In 2020, he promised Mars cargo missions by 2022. Now it’s 2025 and SpaceX hasn’t delivered on any of these promises – not even close. Instead of Mars colonies, we got exploding rockets, failed launches, and orbital debris fields that threaten functioning satellites.

This isn’t innovation – it’s marketing masquerading as engineering. Reportedly SpaceX took proven 1960s rocket technology, rebranded it with flashy CGI videos and bold promises, then used public money and regulatory shortcuts to build an inferior version of what NASA achieved decades ago. Their much-hyped reusable rockets? They’re still losing them at an alarming rate. Their promised Mars missions? Apparently they haven’t even reached orbit yet without creating hazardous space debris and being grounded. Their “breakthrough” Starship? It’s years behind schedule and still exploding on launch.

Yet because deregulation has lowered the bar so far, SpaceX gets celebrated for achievements that would have been considered failures by 1960s standards. This same pattern of substituting marketing for engineering produced Cybertrucks that cannot be exposed to water, increasingly in the news for unexplained deadly crashes.

Boeing’s 737 MAX disaster stands as another stark warning. As oversight weakened, Boeing didn’t innovate – they took deadly shortcuts that killed hundreds and vaporized billions in value. When marketing trumps engineering and systems get a similar free pass, we read about unmistakable tragedy more than any real triumph.

History teaches us that true innovation thrives not in the absence of oversight, but in the presence of clear, meaningful, measured standards, especially those related to safety from harm.

Consider how American scientific innovation operated under intense practical pressures for results in WWII. Early radar systems like the SCR-270 (which detected the incoming Japanese aircraft at Pearl Harbor, though its warning was ignored) and MIT’s Rad Lab developments faced complex challenges with false echoes, ground clutter, and atmospheric interference.

The MIT Radiation Laboratory, established in October 1940, marked a crucial decision point – Vannevar Bush and Karl Compton insisted on civilian scientific oversight rather than pure military control, believing innovation required both rigorous standards and academic freedom. The lab built on the February 1940 cavity magnetron breakthrough by John Randall and Harry Boot that revolutionized radar capabilities. Innovations like the cavity magnetron and H2X ground-mapping radar demonstrated remarkable progress through regulations that enforced rigorous testing and iteration.

Contrast the success of heavily regulated outcomes in WWII with the vague approaches of the Vietnam War, such as Operation Igloo White (1967-1972) – burning $1.7 billion yearly on an opaque ‘electronic battlefield’ of seismic sensors (ADSID), acoustic detectors (ACOUSID), and infrared cameras monitored from Nakhon Phanom, Thailand. The system’s sophisticated IBM 360/65 computers processed thousands of sensor readings but couldn’t reliably distinguish between North Vietnamese supply convoys and local farming activity along the Ho Chi Minh Trail, leading to massive waste in random bombing missions. It was such a failure that President Nixon nonetheless ordered the same system installed around the White House and on American borders. Why? He opposed the kind of oversight that would have made it clear the system didn’t work.

This mirrors today’s AI companies selling us a new generation of ‘automated intelligence’ – expensive systems making bold claims while struggling with basic contextual understanding, their limitations obscured behind proprietary metrics and classification barriers rather than being subjected to transparent, real-world validation.

Critics have said nothing proves this point better than the horrible results from Palantir – just as Igloo White generated endless bombing missions based on misidentified targets, Palantir’s systems have perpetuated endless cycles of conflict by generating flawed intelligence that creates more adversaries than it eliminates. Their algorithms, shielded from oversight by claims of national security, have reportedly misidentified targets and communities, creating the very threats they promised to prevent – a self-perpetuating cycle of algorithmic failure marketed as success: the self-licking ISIS-cream cone.

The sudden rushed push for AI deregulation is most likely to accelerate failures like Palantir’s and lower the bar so far that anything can be rebranded as success. By removing basic oversight requirements, we’re not unleashing innovation – we’re creating an environment where “breakthrough developments” require no real capability or safety, and may even be demonstrably worse than before.

Might as well legalize snake oil.

The Real Cost of an American Leadfoot

The parallels with the tragic leaded gasoline saga are particularly alarming. In the 1920s, General Motors marketed tetraethyl lead as an innovative solution for engine knock. In reality, it was an extremely toxic shortcut and coverup that avoided addressing fundamental engine design issues. The result? Fifty years of widespread lead pollution and untold human and animal suffering that we’re still cleaning up today.

When GM pushed leaded gasoline, they funded fake studies, attacked critics as ‘anti-innovation,’ and claimed regulation would ‘kill the auto industry.’ It took scientists like Patterson and Needleman 50 years of blood samples, soil tests, and statistical evidence before executive orders could mature into meaningful enforcement – and by then, nearly irreversible massive damage was done. Now AI companies run the same playbook, with a crucial difference: lead at least left physical traces. We need to scientifically define ‘AI manipulation’ before we can regulate it, and we need updated ways to measure evolving influence operations that leave no physical evidence behind. Without executive-level regulation requiring transparent logging and testing standards now, we’re not just delaying accountability – we’re ensuring manipulation will be undetectable by design.
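To make “transparent logging” concrete, here is a minimal sketch (one possible scheme I am assuming for illustration, not any mandated standard) of a tamper-evident audit log: each record of an AI system’s activity is chained to the previous record’s hash, so quietly deleting or rewriting history after the fact breaks the chain and becomes detectable.

```python
import hashlib
import json

def append_entry(log: list, record: dict) -> None:
    """Append a record chained to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append({**body, "hash": digest})

def verify_chain(log: list) -> bool:
    """Recompute every hash; any edit or deletion breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != digest:
            return False
        prev_hash = entry["hash"]
    return True

# Hypothetical activity records for an AI system.
log = []
append_entry(log, {"model": "example", "input": "query-1", "output": "answer-1"})
append_entry(log, {"model": "example", "input": "query-2", "output": "answer-2"})
print(verify_chain(log))   # True: history intact
log[0]["record"]["output"] = "silently edited"
print(verify_chain(log))   # False: tampering is detectable
```

The point of the sketch is regulatory, not cryptographic novelty: once logging like this is required, “we have no records” stops being a defense, because a missing or altered record is itself evidence.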

Clair Patterson’s initial discoveries about lead contamination came in 1965, but it took until 1975 for the EPA to announce the phase-out, and until 1996 for the full ban. This was an intentionally corrupted 31-year gap between scientific evidence and regulatory action. The counter-campaign by the Ethyl Corporation (created by GM and Standard Oil) included attacking Patterson’s funding and trying to get him fired from Caltech.

While it took 31 years to ban leaded gasoline despite clear scientific evidence, today’s AI deregulation is happening virtually overnight – removing safeguards before we’ve even finished designing them. This isn’t just regression; it’s willful blindness to history.

Removing AI safety regulations doesn’t solve any of the fundamental challenges of developing reliable, useful and beneficial AI systems. Instead, it allows companies to regress towards shortcuts and crimes, potentially building fundamentally flawed systems unleashing harms that we’ll spend decades trying to recover from.

When we mistake the absence of standards for freedom to innovate, we enable our own decline – just as Japanese automakers dominated by focusing on quality (enforced under anti-fascist post-WWII Allied occupation) while American manufacturers oriented themselves around marketing and took engineering shortcuts. Countries that maintain rigorous AI development standards ultimately will leap ahead of those that don’t.

W. Edwards Deming’s statistical quality control methods, introduced to Japan in 1950 through JUSE (Japanese Union of Scientists and Engineers), became mandatory under occupation reforms. Toyota’s implementation through the Toyota Production System (TPS) starting in 1948 under Taiichi Ohno proved how regulation could drive rather than stifle innovation – creating manufacturing processes so superior that American companies spent decades trying to catch up.
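For a sense of what Deming-style statistical quality control actually computes, here is a minimal sketch (illustrative numbers of my own, not Toyota data) of a Shewhart control-chart check: a center line and three-sigma control limits are estimated from baseline samples, and any new measurement outside those limits flags the process for investigation.

```python
import statistics

def control_limits(samples: list) -> tuple:
    """Estimate a Shewhart chart's lower limit, center line, upper limit."""
    mean = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return mean - 3 * sigma, mean, mean + 3 * sigma

def out_of_control(samples: list, new_point: float) -> bool:
    """Flag a new measurement falling outside the 3-sigma limits."""
    lower, _, upper = control_limits(samples)
    return not (lower <= new_point <= upper)

# Illustrative measurements of a part dimension in millimeters.
baseline = [10.02, 9.98, 10.01, 9.99, 10.00, 10.03, 9.97, 10.01]
print(out_of_control(baseline, 10.01))  # False: normal variation
print(out_of_control(baseline, 10.40))  # True: investigate the process
```

The design choice matters here: the limits come from measured variation in the process itself, not from a marketing target, which is exactly why this kind of standard is hard to game or spin.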

For AI to develop sustainably, just like any technology in history, we need to maintain safety standards that can’t be gamed or spun away from measured indicators. Proper regulatory frameworks reward genuine innovation rather than hype, the same way a good CEO rewards productive staff who achieve goals. Our development processes should be incentivized to build in safety from the ground up, with international standards and cooperation to establish meaningful benchmarks for progress.

False Choice is False

The choice between regulation and innovation is a false one. It’s like being told to choose between having a manager and figuring out what to work on. The real choice is between sustainable progress and shortcuts that cost us dearly in the long run – penny wise, pound foolish. As we watch basic AI oversight being dismantled, we must ask ourselves: are we willing to repeat known mistakes of the past, or will we finally learn from them?

The elimination of basic oversight requirements creates an environment where:

  • Companies can claim “AI breakthroughs” based on vague, probably misleading marketing rather than measurable results
  • Critical safety issues can be downplayed or ignored until they cause major problems and get treated as fait accompli
  • Technical debt accumulates as systems are deployed without proper safety architecture, ballooning maintenance overhead that slows or even stops innovation
  • America’s competitive position weakens as other nations develop more regulated and therefore sustainable approaches

True innovation doesn’t fear oversight – it thrives on it. The kind of breakthrough development that put America at the forefront of aviation, computing, and space exploration came from environments with clear standards and undeniable metrics of success.

The cost of getting this wrong isn’t just economic – it’s existential. We spent decades cleaning up the incredibly difficult aftermath of leaded gasoline that easily could have been avoided. We might spend far longer dealing with the privacy and integrity consequences of unsafe AI systems deployed in the current unhealthy rush for quick extraction of value.

The time to prevent this is now, before we create a mess that future generations will bear.