Category Archives: History

Why Elon Musk Refuses to Deny He Made a Nazi Salute

Not denying because endorsing
Call a spade a spade
Elon Musk’s Nazism is dangerous

When video emerged of Elon Musk giving a Nazi salute at a political rally, his response was telling: He never denied it.

He never denied it, not even once. Most people, if falsely accused of making a Nazi salute, would respond immediately with “I absolutely did not do that.” Instead, Musk spun the accusation into a “dirty tricks campaign” without ever denying the act itself.

Elon Musk tweet about dirty tricks campaigns

Think about these tactics carefully. He didn’t say “I didn’t give a Nazi salute.” He didn’t say “That’s not what happened.” He certainly didn’t say “I stand opposed to racism and hate.” Instead, he attacked the people who dared to point out his Nazi salute, claiming he wants “better dirty tricks” from them.

This is straight from the Nazi propaganda playbook of portraying targets as dishonest and manipulative. When Hitler stood trial in 1924 for the 1923 Beer Hall Putsch, he didn’t deny trying to overthrow the government. Instead, he turned his trial into a platform to attack his accusers, claiming they were the threat to Germany instead of him.

Musk is playing an even more dangerous game. By dismissing Nazi comparisons as “sooo tired” while never denying his apparent Nazi salute, he’s sending a clear message: being called a Nazi is worse than actually behaving like one.

Notice another sleight of hand: he complains about “the everyone is Hitler attack” – yet nobody said “everyone.” They said Musk, specifically, made a Nazi salute. By pretending this is about “everyone” being called Hitler, he’s creating a straw man to discredit his critics while still never denying what he actually did. It’s deflection through exaggeration – make the accusation sound ridiculous by pretending it’s broader than it is.

This is how extremism gets normalized – not through outright endorsement, but through strategic non-denials turned into attacks. Attack those who point out extremist behavior, while letting the behavior itself slide as if what everyone sees isn’t real. It’s a form of winking acknowledgment to supporters while maintaining plausible deniability.

Even more disturbing is Musk’s specific choice of words. His repeated use of “dirty tricks” echoes classic Nazi antisemitic propaganda, which routinely relied on the German word for “dirty” (schmutzig) to dehumanize Jewish people. White supremacist hate groups typically promote the trope that Jews use “dirty tricks” to control or subvert society for their own benefit, a trope rooted in long-standing antisemitic stereotypes.

Thus Musk’s response wasn’t casual language – it was a deliberate propaganda tool invoking Nazi themes about Jews being “unclean” or “impure.” When Musk calls for “better dirty tricks,” he’s not just refusing to deny his Nazi salute – he’s actively dog-whistling Nazi-era antisemitic language while doing so.

Further historical echoes are impossible to ignore. After Kristallnacht in 1938, the Nazi leadership didn’t deny organizing the violence against Jewish citizens. Instead, they blamed the victims for “provoking” it. Don’t deny the action – just attack those who criticize it and claim victimhood.

When someone with Musk’s massive platform plays these games, the stakes become enormous. His claim about leaving the “kindness party” becomes even more sinister when paired with his use of Nazi-era antisemitic language. He’s not just switching political parties – he’s embracing and amplifying extremist rhetoric while playing the victim.

This is about more than one gesture or one tweet. It’s about more than years of evidence that Elon Musk promotes Nazism. It’s about recognizing how extremism spreads in the digital age. Not through outright statements, but through strategic non-denials and attacks on critics.

When influential figures refuse to deny their extremist actions and instead attack those who dare to point them out, they’re doing more than defending themselves – they’re normalizing the indefensible.

History shows us exactly where this leads. The only question is whether we’ll stop it in time to avoid the end of democracy.

UPDATE January 23, 2025: Two days after giving a Nazi salute and facing limited pushback, Musk moved from non-denial to open endorsement, posting a series of “jokes” using the names of Nazi leaders.

Elon Musk tweet with Nazi leader puns

Let’s be crystal clear: These aren’t just puns. This is Musk admitting it was a Nazi salute. He is literally mocking anyone who wasn’t sure he made a Nazi salute, laughing at them. Emboldened by insufficient resistance to his initial act, he’s now comfortable enough to openly promote light humor about genocidal Nazi leaders – Hess, Goebbels, Göring, and Himmler – to his 37 million viewers.

This is exactly how extremism advances: Test the waters with a Nazi salute. When the response is muted, escalate to openly referencing Nazi leaders. Test the door handle; if it’s unlocked, burst out laughing. His “bet you did nazi that coming” isn’t just a sad pun to draw viewers – it’s a boast. He’s signaling to militant extremist domestic terrorism cells how easily his confused targets allowed him to escalate from implicit to explicit Nazi messaging.

What started as “just don’t deny it” has within a day become “joke about it” and “laugh about it.” The progression is textbook: deny nothing, mock critics, then openly embrace Nazi ideology. Next comes racist violence disguised as “self defense” – a tactic perfected by “America First” movements from the 1800s through the 1900s. This is deeply American, not new.

The firebombing of Black Wall Street, coordinated state violence against labor unions, concentration camps for Japanese Americans, mass graves of indigenous peoples… Nazi “innovations” were actually imitations of American policies under presidents Jackson, Polk, and Wilson. America was more than a blueprint for Nazi Germany’s atrocities: Hitler explicitly praised American race laws in “Mein Kampf” and told the world he would implement the antisemitic violence Henry Ford encouraged. Now Musk, himself an illegal immigrant who exploited open borders to launder his family’s blood-stained apartheid fortunes, is cynically activating the most sinister meaning of MAGA’s “again”: the return to state-sanctioned racial terror.

Hitler was Austrian, not German. His background, like Musk’s South African one, demonstrates outsiders exploiting and amplifying existing nationalist extremism targeting… outsiders.

This is how it happens. This is how it’s happening.

Trump’s team failed to execute their first attempt, but they told us their Nazi playbook openly in 2016.

Like [President] Jackson’s [racist genocidal] populism, we’re going to build an entirely new political movement…. We’re just going to throw it up against the wall and see if it sticks. It will be as exciting as the 1930s.

When Bannon proclaimed they would build a movement like the 1930s while praising Jackson’s violent populism, he wasn’t referencing the New Deal – a laughable claim given his consistent condemnation of liberalism as a decline into communism. No, he was explicitly signaling his hope for fascism’s rise, testing the waters just as Musk does now.

This pattern didn’t start with Musk; he’s merely the latest to perfect and amplify it: speak in code, gauge the reaction, then escalate attacks. They’re accelerating far faster than in 2016, learning from Hitler’s evolution from the failed 1923 putsch and criminal charges to the 1933 dictatorship. That’s why they are centralizing power while deregulating everything immediately – letting big tech monopolize society in order to drive harms faster and deeper than their first attempt.

And we’re running out of time to stop it.

Update: A subsequent tweet perfectly illustrates the pattern. Rather than addressing concerns about Nazi symbolism, Musk deploys classic propaganda tactics by creating a false equivalence – labeling his critics as “radical leftists” who praise Hamas. The timing (3:37 AM) and massive reach (78.4M views) demonstrate a deliberate strategy to maximize exposure while making substantive discussion impossible.

This continues the progression the article has traced: from non-denial to mockery to attacking critics through inflammatory comparisons. By pitting criticism of Nazi symbolism against support for Hamas – a totally false choice – the tweet creates an artificial conflict designed to seduce Jewish critics into defending Musk’s Nazi salute, a particularly insidious tactic given that both Hamas and Musk have documented histories of promoting Nazi ideology.

Nazi Germany was able to insinuate its exterminationist antisemitism into the Middle East, and that influence continues to poison Arab and especially Palestinian views of Israelis and Jews in general.

To stand against Musk’s Nazi salute – let alone his copious dissemination of Nazi merch and symbolism over the years, such as rebranding Twitter with a swastika – would therefore also mean standing against Hamas. For him to claim that a stand against him is a stand for Hamas is to set a trap far too many Jews will fall into. This new tweet further normalizes extremist rhetoric through strategic deflection and sets up dangerous further escalation, all while avoiding any direct denial or accountability.

In 2023 Ai Weiwei Called Out Elon Musk as a Nazi

In July 2023, an artist famous for political commentary dropped his work on social media.

Source: Twitter

Ai Weiwei’s artwork hit different from most – this was someone who had already been jailed and banned from Twitter for speaking truth to power.

Yet when he called out clear Nazi connections, there was no denial and barely a whisper of restriction (Elon Musk censored Ai Weiwei’s animated X by deleting it). The silence spoke volumes.

He’s particularly scathing about Elon Musk, who received multiple favours from the CCP to set up his Tesla factory in Shanghai and sings the praises of the Chinese government. Musk owns X, the platform that used to be Twitter, and Ai has on his phone an animation he created, the X spinning and turning into a swastika. It was deleted from X but was still available on Instagram. ‘It’s so creepy. I mean it looks so ugly,’ he said.

This artistic rendering of the X brand was deleted by self-promoting “free speech extremist” Elon Musk. Source: Ai Weiwei

Fast forward to 2025 and the pattern is painfully clear. While some still debate whether to call a spade a spade, Musk has moved from dog whistles to bullhorns, now openly making Hitler salutes at political rallies that spark “we’re back” celebrations across social media.

“Maybe woke really is dead,” white nationalist Keith Woods posted on X.

“Did Elon Musk just Heil Hitler …” right-wing commentator Evan Kilgore posted on X. “We are so back.”

Today’s Nazi groups aren’t hiding anymore – they’re celebrating how their messages have gone mainstream, just as Ai Weiwei warned us through his art years ago. The path from his Twitter critique to American political rallies is as straight as it is terrifying.

And here’s where history rhymes with a vengeance. Our language “experts” stand in rising floodwaters, watching the dam crack, telling us to wait for “concrete evidence” of danger. By the time they admit the obvious, the flood will have already swept us all away.

“I’m skeptical it was on purpose,” said Jared Holt, a senior research analyst at the Institute for Strategic Dialogue, which tracks online hate. “It would be an act of self-sabotage that wouldn’t really make much sense at all.”

Self-sabotage doesn’t make sense? Why, Jared, it’s the very definition of Nazism. Do you not know history?

To understand Nazis is to understand self-destruction because it’s their entire endgame. Every time. The proof is found in history, as artists painting on Musk’s own factory have depicted so simply:

Elon Musk has been a frequent promoter of the AfD (Nazi) party in Germany, sparking protests like this graffiti outside the Tesla factory.

This isn’t about academic caution – it’s about the deadly paralysis of overthinking while fascists build real power. They don’t need your perfect analysis. They just need your hesitation.

The march of fascism through Europe left millions dead in its wake before World War II finally stopped it.

Remember: While scholars polish their dissertations on “the nature of rising authoritarianism,” extremists are seizing actual power. They don’t play by academic rules. They never have.

We’ve seen this playbook before. We know how it ends. When people say “we’re not Germany in the ’30s,” they’re inadvertently describing something potentially even more dangerous — classic racist “show me your papers” policing mixed with modern technology, which we need to call out as immoral and illegal. Millions of explosive racist killer robots descending on cities is not out of the question right now.

Teslas are known for their unexplained sudden “veered” crashes into people and infrastructure, causing widespread suffering from intense chemical fires.

After all, the Nazis themselves studied and borrowed from American systemic and industrialized racism to build the European genocide machine. History doesn’t repeat, but it echoes – and right now, those echoes are extremely clear.

The only question is who among us will act on the warning signs this time.

Ai Weiwei was right.

Related: I coincidentally wrote on this blog, two days after Ai Weiwei’s tweet, about how the Twitter rebrand is a swastika on top of Tesla already being steeped in Nazi messaging.

Trump Repeals AI Innovation Rules, Declares No Limits for Big Tech to Hurt Americans

The Great AI Safety Rollback:
When History Rhymes with Catastrophe

The immediate and short-sighted repeal of AI oversight regulations threatens America with a return to some of the most costly historical mistakes: prioritizing quick profits over sustainable innovation.

Like the introduction of leaded gasoline in the 1920s, we’re watching in real time as industry leaders push for unsafe deregulation that normalizes reckless behavior under the banner of innovation. What happens when AI systems analyzing sensitive data are no longer required to log their activities? When ‘proprietary algorithms’ become a shield for manipulation? When the same companies selling AI tools are also controlling critical infrastructure?
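As one concrete illustration of that first question, here is a minimal, hypothetical sketch of what an activity-logging requirement could look like in practice. The file name, function, and record fields are illustrative assumptions, not any vendor’s actual API or any mandated format.

```python
import hashlib
import json
import time

# Hypothetical sketch: an append-only audit trail for AI system activity,
# the kind of basic record-keeping that oversight rules could require.
AUDIT_LOG_PATH = "ai_audit.log"  # illustrative path, not a real standard

def log_ai_activity(model_name: str, request: str, response: str) -> None:
    """Append a record of one model interaction to the audit log."""
    record = {
        "timestamp": time.time(),
        "model": model_name,
        # Hash inputs and outputs so sensitive text need not be stored
        # verbatim, while still allowing later verification against
        # retained copies held by the operator.
        "request_sha256": hashlib.sha256(request.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example usage with a hypothetical model call:
# answer = some_model.generate(prompt)
# log_ai_activity("example-model-v1", prompt, answer)
```

Even a log this simple would let an auditor later establish what a system was asked and what it produced, without forcing operators to store sensitive text verbatim. Repealing the requirement removes exactly that paper trail.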

The leaded gasoline parallel is stark because industry leaders actively suppressed research showing devastating health impacts for decades, all while claiming regulations would ‘stifle innovation.’ Now we face potentially graver risks with AI systems that could be deployed to influence everything from financial markets to allegedly rigged voting systems, with even less transparency. Are we prepared to detect large-scale coordination between supposedly independent AI systems? Can we afford to wait decades to discover what damage was done while oversight was dismantled?

Deregulation Kills Innovation

Want proof? Look no further than SpaceX – the poster child of deregulated “innovation.” In 2016, Elon Musk promised Mars colonies by 2022. In 2017, he promised Moon tourism by 2018. In 2019, he promised Tesla robotaxis by 2020. In 2020, he promised Mars cargo missions by 2022. Now it’s 2025 and Musk hasn’t delivered on any of these promises – not even close. Instead of Mars colonies, we got exploding rockets, failed launches, and orbital debris fields that threaten functioning satellites.

This isn’t innovation – it’s marketing masquerading as engineering. Reportedly SpaceX took proven 1960s rocket technology, rebranded it with flashy CGI videos and bold promises, then used public money and regulatory shortcuts to build an inferior version of what NASA achieved decades ago. Their much-hyped reusable rockets? They’re still being lost at an alarming rate. Their promised Mars missions? Apparently the rockets meant to fly them haven’t even reached orbit without creating hazardous space debris and being grounded. Their “breakthrough” Starship? It’s years behind schedule and still exploding on launch.

Yet because deregulation has lowered the bar so far, SpaceX gets celebrated for achievements that would have been considered failures by 1960s standards. This same pattern of substituting marketing for engineering produced Cybertrucks that cannot be exposed to water and are increasingly in the news for unexplained deadly crashes.

Boeing’s 737 MAX disaster stands as another stark warning. As oversight weakened, Boeing didn’t innovate – they took deadly shortcuts that killed hundreds and vaporized billions in value. When marketing trumps engineering and systems get a similar free pass, we read about unmistakable tragedy more than any real triumph.

History teaches us that true innovation thrives not in the absence of oversight, but in the presence of clear, meaningful, measured standards especially related to safety from harm.

Consider how American scientific innovation operated under intense practical pressure for results in WWII. Early radar systems like the SCR-270 (which detected the incoming Japanese aircraft at Pearl Harbor but was ignored) and MIT’s Rad Lab developments faced complex challenges with false echoes, ground clutter, and atmospheric interference.

The MIT Radiation Laboratory, established in October 1940, marked a crucial decision point – Vannevar Bush and Karl Compton insisted on civilian scientific oversight rather than pure military control, believing innovation required both rigorous standards and academic freedom. The lab built on the February 1940 cavity magnetron breakthrough by John Randall and Harry Boot in Britain, which revolutionized radar capabilities. Innovations building on the magnetron, such as H2X ground-mapping radar, demonstrated remarkable progress through standards that enforced rigorous testing and iteration.

Contrast the success of those heavily regulated WWII efforts with the vague approaches of the Vietnam War, such as Operation Igloo White (1967-1972) – burning $1.7 billion yearly on an opaque ‘electronic battlefield’ of seismic sensors (ADSID), acoustic detectors (ACOUSID), and infrared cameras monitored from Nakhon Phanom, Thailand. The system’s sophisticated IBM 360/65 computers processed thousands of sensor readings but couldn’t reliably distinguish North Vietnamese supply convoys from local farming activity along the Ho Chi Minh Trail, leading to massive waste in random bombing missions. It was such a failure that, naturally, President Nixon ordered the same system installed around the White House and along American borders. Why? He opposed the kind of oversight that made it clear the system didn’t work.

This mirrors today’s AI companies selling us a new generation of ‘automated intelligence’ – expensive systems making bold claims while struggling with basic contextual understanding, their limitations obscured behind proprietary metrics and classification barriers rather than being subjected to transparent, real-world validation.

Critics have said nothing proves this point better than the horrible results from Palantir – just as Igloo White generated endless bombing missions based on misidentified targets, Palantir’s systems have perpetuated endless cycles of conflict by generating flawed intelligence that creates more adversaries than it eliminates. Their algorithms, shielded from oversight by claims of national security, have reportedly misidentified targets and communities, creating the very threats they promised to prevent – a self-perpetuating cycle of algorithmic failure marketed as success: the self-licking ISIS-cream cone.

The sudden, rushed push for AI deregulation is most likely to accelerate failures like Palantir’s and lower the bar so far that anything can be rebranded as success. By removing basic oversight requirements, we’re not unleashing innovation – we’re creating an environment where “breakthrough developments” require no real capability or safety, and may even be demonstrably worse than what came before.

Might as well legalize snake-oil.

The Real Cost of an American Leadfoot

The parallels with the tragic leaded gasoline saga are particularly alarming. In the 1920s, General Motors marketed tetraethyl lead as an innovative solution for engine knock. In reality, it was an extremely toxic shortcut and a coverup that avoided addressing fundamental engine design issues. The result? Fifty years of widespread lead pollution and untold human and animal suffering that we’re still cleaning up today.

When GM pushed leaded gasoline, they funded fake studies, attacked critics as ‘anti-innovation,’ and claimed regulation would ‘kill the auto industry.’ It took scientists like Patterson and Needleman 50 years of blood samples, soil tests, and statistical evidence before executive orders could mature into meaningful enforcement – and by then, nearly irreversible damage was done. Now AI companies run the same playbook, with a crucial difference: lead left physical traces, while algorithmic manipulation does not. We need to scientifically define ‘AI manipulation’ before we can regulate it. We need updated ways to measure evolving influence operations that leave no physical traces. Without executive-level regulation requiring transparent logging and testing standards now, we’re not just delaying accountability – we’re ensuring manipulation will be undetectable by design.

Clair Patterson’s initial discoveries about lead contamination came in 1965, but it took until 1975 for the EPA to announce the phase-out, and until 1996 for the full ban – a 31-year gap between scientific evidence and regulatory action, intentionally prolonged by industry corruption. The counter-campaign by the Ethyl Corporation (created by GM and Standard Oil) included attacking Patterson’s funding and trying to get him fired from Caltech.

While it took 31 years to ban leaded gasoline despite clear scientific evidence, today’s AI deregulation is happening virtually overnight – removing safeguards before we’ve even finished designing them. This isn’t just regression; it’s willful blindness to history.

Removing AI safety regulations doesn’t solve any of the fundamental challenges of developing reliable, useful and beneficial AI systems. Instead, it allows companies to regress towards shortcuts and crimes, potentially building fundamentally flawed systems unleashing harms that we’ll spend decades trying to recover from.

When we mistake the absence of standards for freedom to innovate, we enable our own decline – just as Japanese automakers came to dominate by focusing on quality (enforced under the anti-fascist post-WWII Allied occupation) while American manufacturers oriented themselves around marketing and took engineering shortcuts. Countries that maintain rigorous AI development standards will ultimately leap ahead of those that don’t.

W. Edwards Deming’s statistical quality control methods, introduced to Japan in 1950 through JUSE (Japanese Union of Scientists and Engineers), became mandatory under occupation reforms. Toyota’s implementation through the Toyota Production System (TPS) starting in 1948 under Taiichi Ohno proved how regulation could drive rather than stifle innovation – creating manufacturing processes so superior that American companies spent decades trying to catch up.
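For readers unfamiliar with what Deming’s statistical quality control actually involved, here is a simplified, illustrative sketch of one core tool: a control chart that flags production subgroups whose averages drift beyond three standard deviations of the overall process mean. The data and function names are made-up examples under that simplification, not drawn from Toyota or JUSE materials.

```python
import statistics

def control_limits(samples: list[list[float]]) -> tuple[float, float, float]:
    """Return (lower limit, center line, upper limit) for subgroup means."""
    means = [statistics.mean(s) for s in samples]
    center = statistics.mean(means)
    # Spread of the observed subgroup means; a simplified stand-in for
    # the within-subgroup estimates used in full X-bar chart practice.
    spread = statistics.stdev(means)
    return center - 3 * spread, center, center + 3 * spread

# Hypothetical measurements from five production subgroups.
subgroups = [[9.8, 10.1, 10.0], [10.2, 9.9, 10.1], [10.0, 10.0, 9.9],
             [10.3, 10.4, 10.2], [9.7, 9.8, 10.0]]
low, center, high = control_limits(subgroups)
for i, s in enumerate(subgroups, start=1):
    m = statistics.mean(s)
    status = "in control" if low <= m <= high else "out of control"
    print(f"subgroup {i}: mean={m:.2f} ({status})")
```

The point, for the regulation debate, is that the “standard” here is a measurable statistical threshold rather than a marketing claim – exactly the kind of enforceable benchmark the next paragraph argues AI development needs.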

For AI to develop sustainably, just like any technology in history, we need to maintain safety standards that can’t be gamed or spun away from measured indicators. Proper regulatory frameworks reward genuine innovation rather than hype, the same way a good CEO rewards productive staff who achieve goals. Our development processes should be incentivized to build in safety from the ground up, with international standards and cooperation to establish meaningful benchmarks for progress.

False Choice is False

The choice between regulation and innovation is a false one. It’s like saying choose between having a manager and figuring out what to work on. The real choice is between sustainable progress and shortcuts that cost us dearly in the long run — penny wise, pound foolish. As we watch basic AI oversight being dismantled, we must ask ourselves: are we willing to repeat the known mistakes of the past, or will we finally learn from them?

The elimination of basic oversight requirements creates an environment where:

  • Companies can claim “AI breakthroughs” based on vague, probably misleading marketing rather than measurable results
  • Critical safety issues can be downplayed or ignored until they cause major problems and get treated as fait accompli
  • Technical debt accumulates as systems are deployed without proper safety architecture, ballooning maintenance overhead that slows or even stops innovation
  • America’s competitive position weakens as other nations develop more regulated and therefore sustainable approaches

True innovation doesn’t fear oversight – it thrives on it. The kind of breakthrough development that put America at the forefront of aviation, computing, and space exploration came from environments with clear standards and undeniable metrics of success.

The cost of getting this wrong isn’t just economic – it’s existential. We spent decades cleaning up the incredibly difficult aftermath of leaded gasoline that easily could have been avoided. We might spend far longer dealing with the privacy and integrity consequences of unsafe AI systems deployed in the current unhealthy rush for quick extraction of value.

The time to prevent this is now, before we create a mess that future generations will bear.

Today Elon Musk Gave the Hitler Salute: Twittler of the Digital Reich

Elon Musk used the unmistakable Hitlergruß “Sieg Heil” (Nazi) salute today at a political rally.

This Nazi salute is banned in many countries, including Germany, Austria, Slovakia, and the Czech Republic as a criminal offense. The gesture remains inextricably linked to the Holocaust, genocide, and crimes of Nazis. Such illegal use or mimicry of Nazi gestures continues to be a serious matter that can result in criminal charges due to their connection with hate speech and extremist ideologies.

Elon Musk’s calculated public displays of Nazi symbolism have been a long road culminating in this “Sieg Heil” gesture on a political stage. It represents a disturbing parallel to historical patterns of media manipulation and democratic erosion. The following analysis, building on years of warnings on this blog about Musk’s growing displays of Nazism, examines his very clear Nazi salute through the lens of historical scholarship on propaganda techniques and media control.

As noted by Ian Kershaw in “Hitler: A Biography” (2008), the Nazi seizure of control over German media infrastructure occurred with remarkable speed.

Within three months of Hitler’s appointment as Chancellor, the Reich Ministry of Public Enlightenment and Propaganda under Joseph Goebbels had established near-complete control over radio broadcasting. This mirrors the rapid transformation of Twitter following Musk’s acquisition, where content moderation policies were dramatically altered within a similar timeframe to promote Nazism.

Many people were baffled why American and Russian oligarchs would give Elon Musk so much money to buy an unprofitable platform and drive it toward extremist hate speech. Today we see that it was simply a political campaign tactic to destroy democracy. Of course it lost money. Of course it was a business disaster. Does anyone really think Russia calculates the value of a bomb dropped on democracy only in terms of the explosive materials lost on impact?

Copious reporting informs us how the Reich Broadcasting Corporation achieved dominance through both technological and editorial control:

To maximize influence, formerly independent broadcasters were combined under the policy of Gleichschaltung, or synchronization, which brought institutions in line with official policy points. Goebbels made no secret that “radio belongs to us.” The only two programs were national and local information. They began with the standard “Heil Hitler” greeting and gave plenty of airtime to Adolf Hitler.

This parallels the documented surge in hate speech on Twitter post-acquisition. Under the thumb of Elon Musk, the platform exploded with Nazism, with researchers documenting increases even in the first months. His response to those who cite evidence of this has been to angrily threaten those researchers and erect velvet ropes and paywalls. Staff remaining at Twitter who moderated speech or otherwise respected human life were quickly fired and replaced with vulnerable sycophants, the few roles left designed to be mere cogs in a digital Reich.

The Nazis understood that controlling the dominant communication technology of their era was crucial to reshaping public discourse, as Jeffrey Herf argues in “The Jewish Enemy” (2006). Radio represented a centralized broadcast medium that could reach millions simultaneously. Herf notes:

The radio became the voice of national unity, carefully orchestrated to create an impression of spontaneous popular consensus.

The parallel with social media platform control is striking. However, as media historian Victoria Carty observes in “Social Movements and New Technology” (2018), modern platforms present even greater risks due to:

  1. Algorithmic amplification capabilities
  2. Two-way interaction enabling coordinated harassment
  3. Global reach beyond national boundaries
  4. Data collection enabling targeted manipulation

The normalization of extremist imagery often comes within a shrewd pattern of “plausible deniability” through supposedly accidental or naive usage.

The 2018 incident of Melania Trump wearing a pith helmet – a potent symbol of colonial oppression – in Kenya provides an instructive parallel. Just as colonial symbols can be deployed with claims of ignorance about their historical significance, modern extremist gestures and symbols are often introduced through claims of misunderstanding or innocent intent.

So too does Elon Musk deny understanding any symbolism or meaning in his words and actions, while regularly signaling that he is the smartest man in any room. This contradiction is not accidental: it supercharges normalization by someone who uses his false authority to promote Nazism.

Martin M. Winkler’s seminal work “The Roman Salute: Cinema, History, Ideology” (2009) provides crucial insight into how fascist gestures became normalized through media and entertainment. The “Roman salute,” which would later become the Nazi salute, was actually a modern invention popularized through theatrical productions and early cinema, demonstrating how mass media can legitimize and normalize extremist symbols by connecting them to an imagined historical tradition.

Winkler’s research shows how early films about ancient Rome created a fictional gesture that was later appropriated by fascist movements precisely because it had been pre-legitimized through popular culture. This historical precedent is particularly relevant when examining how social media can similarly normalize extremist symbols through repeated exposure and false claims of historical or cultural legitimacy.

Perhaps most concerning is the pattern of normalization that emerges in Musk’s behavior, right on cue. Richard Evans’ seminal work “The Coming of the Third Reich” (2003) details how public displays of extremist symbols followed a predictable progression:

  1. Initial testing of boundaries
  2. Claims of misunderstanding or innocent intent
  3. Gradual escalation
  4. Open displays once sufficient power is consolidated

The progression from Musk’s initial “jokes” and coded references (Tesla opens 88 charging stations, Tesla makes an 88 kWh battery, Tesla recommends an 88 km/h speed, Tesla offers 88 screen functions, Tesla promotes 88 ml shot cups, lightning bolt imagery… did you hear the dog whistles?) to rebranding Twitter with a swastika and giving open Nazi salutes follows this pattern with remarkable fidelity.

Modern democratic institutions face unique challenges in responding to these threats.

Unlike 1930s Germany, today’s media landscape is dominated by transnational corporations operating beyond traditional state control. As Hannah Arendt presciently noted in “The Origins of Totalitarianism” (1951), the vulnerability of democratic systems often lies in their inability to respond to threats that exploit their own mechanisms of openness and free discourse.

The key difference between historical radio control and modern social media manipulation lies in the speed and scale of impact, similar to how radio rapidly and completely displaced prior media. Hitler poured state money into making radios as cheap as possible to collapse barriers to his hateful, violent incitement propaganda spreading rapidly.

Yet radio still had reach constraints in physical infrastructure that could be managed and countered by state authorities. Social media platforms are on an Internet designed to route around such obstacles, which Russia bemoans as it clocks over 120 “national security” takedown notices sent every day to YouTube. Internet platforms can be transformed almost instantly through policy changes and algorithm adjustments both for and against democracy. This makes the current situation of extreme course change potentially even more dangerous than historical precedents. Information warfare long ago shifted from the musket to the cluster bomb, but defensive measures for democratic governments have been slow to emerge.

Source: Twitter

The parallel between Hitler’s exploitation of radio and Musk’s control of Twitter raises crucial questions about platform governance and democratic resilience. As political scientist Larry Diamond argues in “Democracy in Decline” (2016), social media platforms have become fundamental infrastructure for democratic discourse, making their governance a matter of urgent public concern.

The progression from platform acquisition to public displays of extremist symbols suggests that current regulatory frameworks are inadequate for protecting democratic institutions from technological manipulation. This indicates a need for new approaches to platform governance that can respond more effectively to rapid changes in ownership and policy.

But it may already be too late for America, just as Hearst realized on Kristallnacht in 1938 that it was too late for Germany and that he never should have been promoting Nazism in his papers.

The historical parallels between 1930s media manipulation and current events are both striking and concerning. While the technological context has changed, the fundamental pattern of using media control to erode democratic norms remains consistent. The speed with which Twitter was transformed following its acquisition, culminating in its owner’s public display of Nazi gestures, suggests that modern democratic institutions may be even more vulnerable to such manipulation than their historical counterparts.

Of particular concern is how social media’s visual nature accelerates the normalization process that Winkler documented in early cinema. Just as early films helped legitimize what would become fascist gestures by presenting them as historical traditions, social media platforms can rapidly normalize extremist symbols through viral sharing and algorithmic amplification, often stripped of critical context or warnings.

Future research should focus on developing frameworks for platform governance (e.g. DoJ for laws, FCC for wireless) that can better protect democratic discourse while respecting fundamental rights. As history demonstrates, the window for effective response to such threats may be remarkably brief.