TSLA Investors Attacking Journalist Accidentally Reveal Autopilot Fraud

Here’s a truly sad conclusion to a story about Tesla Autopilot running over child mannequins, even one hidden behind a wall, failing a simple safety test that LiDAR-equipped cars pass with ease.

As of March 2025, Tesla Autopilot is still blind to objects and humans in the road. Arguably it has only gotten worse, as the company intentionally removed critical safety equipment, slashing costs despite known risks to life and property.

An Electrek journalist had clearly reported what we’ve all seen. Then the TSLA attack dogs came for him, arguing that Autopilot wasn’t enabled. So, as any journalist would, he looked more carefully and realized… Autopilot had fraudulently disengaged itself. Oops.

The funny thing is that I missed that Autopilot disengaged at the last second, but the attacks from Tesla investors pointed it out and actually exposed video evidence of a shady practice from Tesla that has been reported in the past.

In NHTSA’s investigation of Tesla vehicles on Autopilot crashing into emergency vehicles on the highway, the safety agency found that Autopilot would disengage within less than one second prior to impact on average in the crashes that it was investigating…

This would suggest that the ADAS system detected the collision, but too late, and disengaged itself instead of applying the brakes. Now, it looks like the Rober video has caught this behavior on camera.

Busted.
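
For the analysts in the room, here is a minimal sketch of what checking crash telemetry for the pattern NHTSA described might look like. The field names and records are entirely hypothetical, purely to illustrate the analysis, not any real log format.

```python
# Minimal sketch: flag crashes where driver assistance disengaged just
# before impact, the pattern NHTSA described in its investigation.
# Field names and records are hypothetical, for illustration only.

THRESHOLD_S = 1.0  # NHTSA: disengagement averaged under one second pre-impact

crashes = [
    # (crash_id, disengage_time_s, impact_time_s)
    ("case-001", 41.2, 41.9),  # disengaged 0.7s before impact -> flagged
    ("case-002", 10.0, 18.0),  # driver took over well before impact
    ("case-003", 55.5, 55.8),  # disengaged 0.3s before impact -> flagged
]

def last_second_disengagements(records, threshold=THRESHOLD_S):
    """Return crashes where the system cut out within `threshold` seconds of impact."""
    flagged = []
    for crash_id, disengage_t, impact_t in records:
        gap = impact_t - disengage_t
        if 0 <= gap <= threshold:
            flagged.append((crash_id, gap))
    return flagged

for crash_id, gap in last_second_disengagements(crashes):
    print(f"{crash_id}: disengaged {gap:.1f}s before impact")
```

One reason such a filter matters: a last-second disengagement can make a crash look like it happened under manual control, even when the human had no realistic chance to intervene.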

The Helsing Drone Revolution, While South Africans Dismember U.S. Defense

Listen up, everyone. While we’ve had to watch as two very South African spoiled brats loudly take over the American defense industry to set piles of money on fire and throw Hitler salutes (thing one and thing two), the Europeans quietly and professionally have been building something that works. And works damn well.

Remember when we would say “quantity has a quality all its own” to explain the success of Sherman tank swarms from North Africa all the way to liberating Polish death camps? Well, the Germans at Helsing just put that into practice in 2025. Their Resilience Factory isn’t just a damn impressive building; it’s a statement of intent.

Resilience Factories are Helsing’s high-efficiency production facilities designed to provide nation states with local and sovereign manufacturing capacities. Helsing is set to build Resilience Factories across the European continent, with the ability to scale manufacturing rates to tens of thousands of units in case of a conflict. The first Resilience Factory (RF-1) is operational in Southern Germany and has an initial monthly production capacity of more than 1,000 HX-2.

One thousand drones per month, to start.

Let that sink in.

Then imagine what happens when they hit their stride across Europe. As my old man used to say while spitting sunflower seeds off the porch, “the future arrives when you’re looking the other way.” And my fellow security industry experts, America has been looking the wrong way.

The factory pumping out the Helsing HX-2 is the real deal. An X-wing to fight the evil oligarchs? Yes, I know how that sounds, but sometimes life imitates art for a reason.

Artist rendering of a Helsing HX-2 in flight
100 kilometers of range on battery power alone. That’s not just impressive; that’s game-changing.

We’re talking one operator, multiple drones in a swarm, and electronic warfare resistance built right in (ask me about Russian container ships weaponized by Chinese hackers).

A catastrophic demonstration of information warfare: The Solong container ship’s unnatural trajectory into a U.S. military oil tanker bears all the hallmarks of sophisticated navigation system compromise.

You know what our man Patton would have said about this? “Fixed fortifications are monuments to man’s stupidity.” Helsing is producing fixed wings, which are very much not fixed in time or space. They aren’t even manned, because women make great pilots, if you catch my drift. These are the harbingers of a new kind of European defense posture shifting into fifth gear, while “America First” means defense going in reverse (ask me about Elon Musk funding AfD to be a new Nazi platform in Brandenburg, where he built his Nazi factory of Swasticars).

The Tesla factory lit up at night in Brandenburg, a prominent Nazi (AfD) stronghold outside Berlin, Germany

I mean the Pentagon clearly has been writing checks that some Silicon Valley techbros cash without any real intent to deliver outcomes (ask me about Henry Ford taking millions and never delivering a damn thing in WWI, leaving the Allies high and dry because he favored the Germans).

The Ford Motor Co., according to the War Department, received from Wilson’s administration $249,000 for tools which were never delivered. I suppose Henry has them yet. He also has the money, unless he spent it on this election. The Ford Motor Co., for tractors: Number delivered, none. Amount paid, $1,299,000. Where are those tractors? They might be converted into golden chariots, for all I know. The Ford Motor Co., for spare parts: Number delivered, none. Amount paid, $5,517,000.

Boy oh boy, that Henry Ford was a real Elon Musk, wasn’t he?

American autoworkers and their children in 1941 protest Ford’s relationship with Hitler. Source: Wayne State

Helsing is on the right side of history and has been quietly building the future because of their past. As in building, not bloviating. Ukraine is the obvious proving ground, and so far, we’re seeing proof positive.

How many of the South African fever-dream Cybertrucks have made it into battle, let alone survived their first hour in operation? Oh, right, Russia couldn’t get even two of their million-dollar Tesla fluffy battle bots rolling into the field without immediate device failure.

Tesla very deniably supplied a Chechen warlord with Cybertrucks to test, which immediately catastrophically failed… like they couldn’t even move into position, resulting in more ugly finger-pointing than a shirtless Russian dictator on a horse.

As the great Yogi Berra might put it: “The future of warfare ain’t what it used to be.” Tesla drone swarm lobbying now makes about as much sense as the State Department blowing $400 million on Cybertrucks after Elon Musk tampered with government procurement documents on the promise his drones can float… all the way to Mars.


These screenshots from three versions of a State Department procurement document that was posted online show how the plans to procure armored Teslas morphed over time. The State Department says the plans to purchase $400 million of armored Teslas originated with the Biden administration, but NPR’s reporting shows only that the Biden administration planned to spend less than $500,000 to explore whether electric vehicles could be armored for diplomatic use.
The CEO said his magic boyhood idea of a bulletproof vehicle even floats like a duck. Every sorry owner of a ruined Cybertruck has since found out that water is, in fact, wet. Source: edhat

Good luck running a navy that has nothing to show for its excessive corruption but a fleet of flammable boat anchors. Talk about a salute to Hitler.

Swasticars: Remote-controlled explosive devices stockpiled by Musk for deployment into major cities around the world. Real picture of a real munition stockpile being stationed just outside Berlin for attack, on Putin’s command.

Either adapt to this new reality of Helsing delivering the future of drone warfare, or become as relevant as Tesla kitchen knives made from “magic Mars dust” being sold as a high-dollar defense against reinforced water tanks. History repeats in the least amusing ways.

The choice is upon the world. The clock is ticking. America has embarrassed and degraded itself with Hegseth, Thiel, Musk… hollowed out by racist snake-oil salesmen on showboat platforms of tech fantasy. Meanwhile, eyes should have been on Helsing the whole time, a company not waiting for permission to innovate in the necessary defense of Ukraine.

Helsing: Real factories, real drones, real future of warfare. Since the Nazis were defeated in Germany and prevented from rising again, the Germans today no longer suck at basic engineering principles like those nutsy South Africans. Real history fact: Nazi Germany was among the least technologically advanced major powers of WWII, far behind and broken, yet they always talked like damn fools about being ahead.

Remember what the always honest and sincere legendary General Creighton Abrams said when routing the Nazis: “They’ve got us surrounded again, the poor bastards.”

On 26th December 1944, commanding the 37th Tank Battalion, CCR, 4th Armored Division, Lt. Colonel Abrams requested he be allowed to dash his Sherman tanks through Assenois to breach German defenses and reach Bastogne to relieve the 101st Airborne, which had just replied “Nuts” to Nazis demanding surrender. Abrams was right, and for this, Third US Army Commander General George S. Patton called him the “world champion” tank commander.

That’s like Helsing revolutionizing drone warfare while surrounded by backwards-thinking South Africans working for the backwards-thinking Russians to make America go backwards again (MAGbA). The Americans appear completely outmatched now in the defense industry, and will fall even further behind the more they let those two backwards-thinking chaotic things from South Africa have any say.

Who let Peter Thiel and Elon Musk out of the box? Who thought it would be OK? They should not have been let out, said the fish. Will someone lock them up? Cat in the Hat, where are you?

New Dodge Charger EV Whips the Challenger Hellcat Redeye

The legend
Lord knows whether anyone thought a Hellcat Redeye guzzler could fend off a new electric variant. Of course an EV performance package on the Charger gets a better result at face value.

Let’s look at the data. The Charger EV’s performance metrics reveal some fundamental engineering signals typical of big battery upgrades to old dirty burners. At 5,925 pounds, this vehicle clocks in with some serious mass inefficiency. A beefy three-ton design for a two-door car is objectively weird from a systems perspective.

Despite the big-car weight inefficiency, the 670 hp electric drivetrain coupled with AWD achieves 0-60 in 3.3s, compared with the Hellcat’s slippy rear-wheel 3.6s. This delta is expected given better traction coupled with electric motor advantages (peak torque at 0 RPM) versus an always disappointing ICE torque curve.

What’s telling is that the two converge at 100 mph (8.0s vs 7.8s) and at quarter-mile trap speeds (119 mph vs 125 mph), demonstrating that battery-electric output tapers by design under sustained load. The 136 mph top speed limitation further confirms the power delivery constraints of the current battery architecture.

The braking performance (151 ft from 70 mph) is adequate given its mass, but “seesawing” behavior and “excessive understeer” during skidpad testing sound like suboptimal weight distribution and chassis tuning. That means significant safety concerns in emergency avoidance scenarios.

All in all, the Hellcat is yesterday’s lettuce. Nonetheless the Charger EV simply beats it, without flourish, and could have done much better… given where performance norms are these days for new cars. I had expected sub-3s performance, maybe even approaching sub-2s. And better handling. A low and centered battery mass should be leveraged into a handling upgrade.
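
For anyone who wants the deltas laid out plainly, here is a quick sketch using only the figures quoted above. No official datasheet values; the Hellcat’s power and weight are deliberately omitted because this article doesn’t cite them.

```python
# Quick look at the test figures quoted above. Only numbers cited in
# this article are used; nothing is pulled from official datasheets.

charger_ev = {"hp": 670, "weight_lbs": 5925, "s_0_60": 3.3,
              "s_0_100": 8.0, "trap_mph": 119, "top_mph": 136}
hellcat    = {"s_0_60": 3.6, "s_0_100": 7.8, "trap_mph": 125}

# Power-to-weight: ~8.8 lbs per horsepower, heavy for a performance car.
print(f"Charger EV: {charger_ev['weight_lbs'] / charger_ev['hp']:.1f} lbs/hp")

# The EV wins off the line, then the gap inverts under sustained load:
lead_60  = hellcat["s_0_60"]  - charger_ev["s_0_60"]    # +0.3s EV advantage
lead_100 = hellcat["s_0_100"] - charger_ev["s_0_100"]   # -0.2s EV deficit
print(f"EV advantage at 60 mph: {lead_60:+.1f}s; at 100 mph: {lead_100:+.1f}s")
```

The sign flip between the 0-60 and 0-100 numbers is the whole story: instant torque wins the launch, then battery power delivery tapers exactly where the ICE keeps pulling.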

Rogue AI in US Gov Fires Off Yes/No “Are You Communist” Email to UN Leaders

Welcome to the Stupidity of AI-Powered Policy: When Governance is Reduced to One-Move Chess

Send it!

Three recent “AI” developments in the news signal a profound shift in American governance.

First, the BBC says that United Nations aid agencies have received a dubious 36-question form from the US Office of Management and Budget asking if they harbor “anti-American” beliefs or communist affiliations. That in itself should be proof enough that AI systems cannot be trusted to stop themselves from something like an accidental launch of nuclear missiles.

Second, The Atlantic tells us how the Department of Government Efficiency (DOGE) appears to be rapidly implementing AI systems in federal agencies despite significant concerns about their readiness, with plans to replace human workers with incompetent robot operators at the General Services Administration (GSA). This is much the same as Tesla initially boasting it would replace all workers with robots, which failed horribly and caused a rapid roll-back in disaster mode.

Third, Axios reports that the State Department expects AI to assess the social media accounts of student visa holders, as if it can identify, and justify revoking the rights of, those who appear to support ideas or groups designated as terrorist.

This all comes as Facebook, just as one obvious example, has said automated content generation and moderation is a bust because of unavoidable integrity breaches in automated communications systems.

Zuckerberg acknowledged more harmful content will appear on the platforms now.

The “best” attempts by Facebook (notably started by someone accused at Harvard of making no effort at all to avoid harm) have been just wrong, like laughably wrong and in the worst ways, such that they can’t be taken seriously.

This week [in 2021] we reported the unsurprising—but somehow still shocking—news that Facebook continues to push anti-vax groups in its group recommendations despite its two separate promises to remove false claims about vaccines and to stop recommending health groups altogether.

Foreshadowing clumsy and toxic American social media platforms in 2025, Indian troops in the Egyptian desert get a laugh from one of the leaflets which Field Marshal Erwin Rommel took to dropping behind the British lines after his 1942 ground attacks failed. The leaflets, which of course were strongly anti-British in tone, were printed in Hindustani, but far too crude to be effective. (Photo was flashed to New York from Cairo by radio. Credit: ACME Radio Photo)

However, despite the best engineers warning that AI technology is unsafe and unable to deliver safe communications without human expertise, the three parallel developments above are not isolated policy shifts.

They appear to be lazy, rushed, careless initiatives that represent a fundamental transformation in governance from thoughtful, outcome-oriented service to an unaccountable extract-and-run gambit. It’s a shift from career public servants making things work through concentrated, significant effort, to privileged disruptive newcomers feeling entitled to rapid returns without any idea of what they are even asking. The contextless, memoryless nature of both the latest AI systems and certain rushed anti-human leadership styles is now upon us.

The One-Move-at-a-Time Problem in International Relations

When powerful AI systems are deployed in policy contexts without proper human oversight, governance begins to resemble what international relations theorist Robert Jervis would call a “perceptual mismatch,” in which actors fail to understand the complex interdependence that shapes the global system.

It becomes a game of chess played one move at a time, with no strategy beyond the immediate decision other than selfish gains.

There is a query [to the UN] about projects which might affect “efforts to strengthen US supply chains or secure rare earth minerals”.

This is the worst possible way to play on the world stage, revealing an inability to think, learn, adapt, or improve. America looks sloppy and greedy, showing a kind of desperation for wealth extraction, like a 1960s dictatorship.

A Tofflerian Acceleration Crisis

Alvin Toffler, in his seminal work “Future Shock” (1970), warned about the psychological state of individuals and entire societies suffering from “too much change in too short a period of time.” What Toffler was warning us about is how AI-driven governance would accelerate our political systems in ways that frighten anti-science communities into a panic. The domain shift opens a vacuum of trust that we might call “policy shock,” enabling “strong man” (snake oil) decisions made in spite of historical context, ignoring it outright and removing any consideration of second-order effects.

We go from a line with points on it to no lines at all, just a bunch of points.

The UN questionnaire perfectly embodies this anti-science acceleration crisis: complex geopolitical relationships developed over the decades since World War II, reduced to thoughtless binary questions, processed in a flawed algorithmic rush to tick unaccountable boxes rather than at an intelligent, diplomatic pace for measured outcomes.

Similarly, the GSA’s rapid deployment of AI chatbots, conceived as an experimental “sandbox” under the previous administration, is being fast-tracked as a productivity tool amid mass layoffs. It represents exactly the kind of technological acceleration Toffler warned would be devastatingly self-defeating.

The State Department’s AI-powered “Catch and Revoke” program amplifies the acceleration as well, with a senior official boasting that “AI is one of the resources available to the government that’s very different from where we were technologically decades ago.” Well, well, General LeMay would say, now that we have the nuclear bombs, what are we waiting for? Let’s drop them all and get this Cold War over with already! He literally said that, for those of us who appreciate the importance of studying history.

Source: “Dar-win or Lose: the Anthropology of Security Evolution,” RSA Conference 2016

As The Atlantic reports, what was intended to be a careful testing ground immersed in scientific rigor is being transformed into a casino-like gambling table to replace human judgment across federal agencies. At the very moment human judgment is most needed for complex social and political determinations about disruptive technology, the administration keeps talking about rapid implementation to replace any careful consideration of potential consequences.

You could perhaps say Elon Musk has been pulling necessary sensors from Autopilot cars as an “efficiency” move (a la DOG-efficiency), at the very moment every expert in transit safety says such a mistake will predictably cause horrible death and destruction. We in fact need the government workers, we in fact need the agencies, just like we in fact need LiDAR in Autopilot cars detecting dangers ahead so the system can take necessary action to avoid disaster.

The Chaotic Actor Problem

Political scientist Graham Allison introduced the concept of “organizational process models” to explain how bureaucracies function based on standard operating procedures rather than rational calculation. But what happens when leadership resembles what computer scientists call a “memoryless process” of self-serving chaos, where each new state depends only on the current inputs, not on any history that led there?

A leader who approaches each day with no memory of previous positions, much like an AI chatbot that restarts each conversation with limited context due to token constraints, creates a toxic, tyrannical governance pattern (a toy simulation after the list below makes this concrete) that:

  • Disregards Path Dependency: Ignores how previous decisions constrain future options
  • Fails to Recognize Patterns: Misses recurring issues that require consistent approaches
  • Creates Strategic Incoherence: Generates contradictory policies that undermine long-term objectives
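
To make “memoryless” more than a metaphor, here is a toy simulation contrasting a decision-maker who reacts only to today’s input with one who remembers past outcomes. It is entirely illustrative, with made-up signals and no real policy data, just the Markov-style property described above:

```python
import random

# Toy contrast: a "memoryless" decision-maker (each choice depends only
# on the current input, like a Markov process) vs. one that remembers
# which past choices went badly. Entirely illustrative; made-up data.

random.seed(42)

def memoryless_policy(signal):
    """Reacts only to today's signal; no path dependency, no patterns."""
    return "escalate" if signal > 0.5 else "hold"

def learning_policy(signal, history):
    """Discounts today's signal when past escalations backfired."""
    past_failures = sum(1 for choice, outcome in history
                        if choice == "escalate" and outcome < 0)
    return "escalate" if signal - 0.1 * past_failures > 0.5 else "hold"

history = []
for day in range(8):
    signal = random.random()
    outcome = random.uniform(-1.0, 0.2)   # escalation mostly backfires here
    m = memoryless_policy(signal)
    l = learning_policy(signal, history)
    history.append((l, outcome))
    print(f"day {day}: signal={signal:.2f}  memoryless={m:<8}  learning={l}")
```

Run it and the memoryless actor keeps escalating every time the day’s signal spikes, while the learning actor grows cautious as failures accumulate. Path dependency, pattern recognition, and strategic coherence all live in that history variable.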

Historians have noted how authoritarian systems in the 1930s disrupted institutional stability through what scholars later termed “permanent improvisation,” forcing unpredictable governance that replaced rule of law with nothing but a loyalty test to Hitler. The current administration’s approach to governance shares concerning similarities with such historical authoritarian systems, relying on constant policy shifts and disregard for factual consistency.

The danger of the memoryless paradigm appears to be materializing in real time. The Atlantic reports that the GSA chatbot, which could be used to “plan large-scale government projects, inform reductions in force, or query centralized repositories of federal data”, now operates with the same limitations as commercial AI systems.

Systems that very notoriously struggle to reach factual accuracy, that exhibit dangerous biases, and that have no true understanding of context or consequences, are unfit to be implemented without governance. But for the memoryless anti-governance actor, it’s like holding the trigger of an automatic weapon while swinging it wildly, without caring at all about who or what gets hurt.

The State Department’s “Catch and Revoke” program represents perhaps the most alarming implementation of this memoryless approach. Policing speech with faulty technology is a nightmare straight out of the President Jackson experience (leading into the Civil War) or the President Wilson experience (policing speech through WWI). Some have compared today’s AI surveillance to the more recent President Nixon experience and “Operation Boulder” from 1972. Remember when Dick Cheney admitted he had been hired into the Nixon administration to help find students to jail for opposing Nixon? America does not have the best track record on this, and yet today’s technology is different because it makes the scope vastly more expansive and the consequences more immediate.

As one departed GSA employee noted regarding AI analysis of contracts: “if we could do that, we’d be doing it already.”

The rush into flawed systems creates “a very high risk of flagging false positives,” yet there appears to be little consideration of checks against this risk, further proof that memoryless governance fails to learn from past technological overreach. This concern becomes even more acute when the stakes involve not just contracts but people’s citizenship status, as evidence emerges of students leaving the country after visa cancellations related to their speech.

Constructivism vs. Algorithmic Reductionism

International relations theorist Alexander Wendt’s constructivist approach argues that the structures of international politics are not predetermined but socially constructed through shared ideas and interactions. AI-driven policy, by contrast, operates on algorithmic reductionism, which horribly reduces complex social constructs to simplified computable variables.

Imagine trying to represent social interaction as a simple mathematical formula. Hint: Jeremy Bentham tried hard and failed. We know from his extensive efforts that it doesn’t work.

The AI-generated questionnaire sent to the UN is an attempt to categorize humanitarian organizations as either aligned or misaligned with American interests. Such a stupid presentation of American thought reflects a reductionist approach, ignoring what constructivists would recognize as the evolving, socially constructed nature of international cooperation.

It’s like American foreign policy being turned into a slow robot wearing a big hat and saying repeatedly “Hello, I am from America, please answer whether I should hate you”.

The State Department’s new “Catch and Revoke” program employs AI to scan the social media posts of foreign students for content that “appears to endorse” terrorism. This collapses complex political discourse into binary classifications that leave no room for nuance, context, or a constructivist understanding of how meaning is socially negotiated. And that’s not to mention, again, that Facebook says it has conclusively proven the technology isn’t capable of this application, which is why it’s disabling speech monitoring.

Think about the politics of Facebook saying all speech has to be allowed to flow because even its best and most well-funded tech simply can’t scan it properly, while the federal government plows into executing harsh judgments based on rushed, low-budget tech with dubious operators.

Orwellian Optimization Without Context

Chess algorithms excel at optimizing for clearly defined objectives: capture the opponent’s pieces, protect your own, and ultimately checkmate the opposing king. Similarly, an AI tasked with “reducing foreign aid spending” or “prioritizing America first” is surely going to generate questions designed to create easily broken (gamed, if you will) classifications without grasping even a little of the complex ecosystem of international humanitarian work.

Playing Tic-Tac-Toe With Baseballs

Political scientist Joseph Nye’s concept of “soft power,” the ability to shape others’ preferences through attraction rather than force and coercion, becomes particularly relevant here. A chess player who can only ever focus on the next move will inevitably lose to someone thinking five moves ahead (assuming they both play by the rules, instead of believing they can never lose). Similarly, questionnaires that reduce complex international relationships to yes/no questions miss how the dismantling of humanitarian cooperation rapidly diminishes America’s soft power projection. Trust in America is evaporating, and it’s not hard to see why if you can think more than a single move ahead.
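
The one-move problem is easy to demonstrate in code. Here is a deliberately tiny game tree (entirely made up, not a model of any real policy) where a greedy player grabbing the best immediate payoff loses badly to a player who evaluates a few moves deep:

```python
# Toy game tree: each node is (immediate_payoff, children).
# Greedy "one-move chess" grabs the flashiest immediate payoff;
# lookahead evaluates the whole line. Values are made up for illustration.

tree = (0, [
    (10, [(-50, []), (-60, [])]),  # flashy move, disastrous continuations
    (2,  [(5, []),  (4, [])]),     # modest move, solid continuations
])

def greedy_choice(node):
    """Pick the child with the best immediate payoff."""
    _, children = node
    return max(range(len(children)), key=lambda i: children[i][0])

def line_value(node):
    """Best total payoff achievable from this node onward."""
    payoff, children = node
    if not children:
        return payoff
    return payoff + max(line_value(child) for child in children)

def strategic_choice(node):
    """Pick the child with the best long-run line."""
    _, children = node
    return max(range(len(children)), key=lambda i: line_value(children[i]))

print("greedy picks move", greedy_choice(tree))        # 0: +10 now, -50 later
print("strategic picks move", strategic_choice(tree))  # 1: +2 now, +5 later
```

Greedy takes the +10 and ends the line at -40; the strategic player takes the quiet +2 and ends at +7. Swap in “aid relationships” for payoffs and you have the questionnaire problem in miniature.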

Human Cost of Algorithmic Governance

We know from Elon Musk’s use of AI in Tesla that many more people are dying than would have died without it. The cars literally run over people because operators fail to appreciate and prepare for the moments when their car will run someone over. Why? Because Elon Musk’s aggressive promotion of emerging technologies despite documented limitations raises questions about… the ability to see harms. His well-researched methods of public sentiment attack, similar to advance fee fraud, are known to be highly successful in disarming even the most intelligent (e.g. doctors, lawyers, engineers) when they lack the domain expertise necessary to judge his fantasy-level claims of a miraculous future. So if such a deadly pattern of deceptive planning becomes normalized in federal government, what might we expect?

  • Safety Margin Collapse: Complex humanitarian principles based on deep knowledge, like neutrality and impartiality, become impossible to maintain when forced into binary classifications. Similarly, as The Atlantic reports, the nuanced judgment of civil servants is being replaced by AI systems that struggle with “hallucination,” “biased responses,” and “perpetuated stereotypes,” all acknowledged risks on the GSA chat help page. This loss of nuance extends to political speech, where the State Department is using AI to determine if social media posts “appear pro-Hamas,” which is so vague it could capture legitimate political discourse about protecting Israelis from harm. I can’t overemphasize the danger of this collapse, like warning how the machine gun poking out of a balcony in Las Vegas exploited the binary mindset on gun control forced by the NRA.
  • Accelerated Policy Shifts: What the infamous Henry Kissinger liked to call the “architecture of the international order” will degrade rapidly, not through deliberative process but through algorithmic errors reminiscent of the Cuban Missile Crisis. Domestically, we’re already seeing this acceleration, with DOGE advisers reportedly feeding sensitive agency spending data into AI programs to identify cuts and using AI to determine which federal employees should keep their jobs. Need I mention that AI programs lack privacy controls? The OPM breach was minor compared to DOGE levels of security negligence. The State Department’s AI initiative has already resulted in push-button visa revocations and at least one student leaving the country as if in a Kafka novel, bypassing deliberative process and human representation in judgment.
  • Feedback Loops: As organizations adapt their responses to pass algorithmic filters, we risk creating what sociologist Robert Merton called a “self-fulfilling prophecy”: a system that outputs the adversarial relationships it was designed to detect. This dynamic resembles how some surveillance technology companies may inadvertently create the very problems they claim to solve, potentially building systems (e.g. Palantir) that generate false positives while marketing themselves as solutions. This mirrors the current situation where, as one former GSA employee told The Atlantic, AI flagging of “potential fraud” will likely generate its own fraud through numerous false positives, with no checks apparently in place (a quick base-rate calculation after this list shows why). Free speech advocates are already noting the “chilling effect” on visa holders’ willingness to engage in constitutionally protected speech, which is exactly the kind of feedback loop that reinforces compliance through false positives at the expense of democratic values.
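
The false-positive math deserves a moment. Here is the base-rate calculation in miniature; every number is an assumption for illustration (no agency has published these rates), but the shape of the result holds for any rare-event screening:

```python
# Base-rate problem behind AI "fraud flagging". All rates below are
# illustrative assumptions, not published agency figures.

population     = 1_000_000  # contracts (or visa holders) screened
base_rate      = 0.001      # 0.1% actually fraudulent
true_pos_rate  = 0.90       # the model catches 90% of real cases
false_pos_rate = 0.05       # and wrongly flags 5% of innocent ones

actual_bad  = population * base_rate
true_flags  = actual_bad * true_pos_rate
false_flags = (population - actual_bad) * false_pos_rate

precision = true_flags / (true_flags + false_flags)
print(f"total flagged: {true_flags + false_flags:,.0f}")
print(f"flags that are correct: {precision:.1%}")  # roughly 1.8%
```

Under these assumptions, over 50,000 cases get flagged and more than 98% of them are innocent. That is what “no checks appear to be in place” means in practice: the queue of accusations is almost entirely noise.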

Closing One Eye Around the Blind, Making Moves Against One-Move Thinking

Francis Fukuyama, despite his “End of History” thesis, later recognized that liberal democracy requires ongoing maintenance and adaptation. Similarly, effective governance, like chess mastery, requires thinking many moves ahead and understanding the entire board. It demands appreciation for strategy, history, and the complex interplay of all pieces far beyond mechanical application of rules.

The contrast between governance approaches is striking. The previous administration’s executive order on AI emphasized “thorough testing, strict guardrails, and public transparency” before deployment. As a long-time AI security hacker, I can’t agree enough that this is the only way to get where we need to go, to innovate in the security necessary to make AI trustworthy at all. However, the current radical approach by anti-government extremists dismantling representative government, as The Atlantic reports, appears to treat “the entire federal government as a sandbox, and the more than 340 million Americans they serve as potential test subjects.”

Tesla’s Autopilot technology has been associated with a rapid rise in preventable fatalities, raising serious questions about whether the technology was deployed before adequate safety testing. The rapid deployment of unproven AI systems with life-or-death consequences represents a concerning pattern, one that prioritizes technological shortcuts and false efficiency over the rigorous safety protocols that actually deliver long-term savings.

This divergence is plainly visible in policy moves that have all the hallmarks of loyalists appointed by Trump to gut the government and replace it with incompetence and graft machines. Whereas determining whether a move constitutes risk traditionally required careful human judgment weighing multiple factors to see into the outcomes, the “Catch and Revoke” program reflects a chess player focused solely on the current move and completely blind to what’s ahead. When AI flags a social media post as “appearing pro” anything, that alone can now trigger a massive change in someone’s civil rights. This is having real-world consequences, just as Tesla has been killing so many people with no end in sight. Raising alarm about the constitutional implications of unregulated AI should be put in the context of allowing Tesla to continue operating manslaughter robots on public roads.

The rise of all these AI developments exemplifies a radical difference in concepts of integrity, and of what constitutes a breach, between strategic chess thinking and playing one move at a time.

If we’re entering an era where AI systems—or leaders who operate with similar memoryless, contextless approaches—are increasingly involved in policy implementation, we must find ways to reintroduce institutional memory, historical context, and strategic foresight.

Otherwise, we risk a future where both international relations and domestic governance are reduced to a poorly played game ruled by self-defeating cheaters—as real human lives hang in the balance. The binary questionnaire to UN agencies, the rapid deployment of AI across federal agencies, and the algorithmic policing of social media aren’t just parallel developments—they’re complementary moves in the same dangerous game of governance without memory, context, or foresight.

We’re a decade late on this already. Please recognize the pattern before the game reaches its destructive conclusion. The Cuban missile crisis was a race to a place where nobody is a winner, and we’re not far from repeating that completely stupid game, taking one selfish and suicidal step at a time.

The book that inspired Dr. Strangelove