Lord knows if anyone thought a Hellcat Redeye guzzler could fend off a new electric variant. Of course, at face value, an EV performance package on the Charger produces the better result.
Let’s look at the data. A Charger EV’s performance metrics reveal some fundamental engineering signals typical of big battery upgrades to old dirty burners. At 5,925 pounds, this vehicle carries real mass inefficiency. A nearly three-ton design for a two-door car is objectively weird from a systems perspective.
Despite that big-car weight penalty, the 670hp electric drivetrain coupled with AWD achieves 0-60 in 3.3s, against the Hellcat’s traction-limited rear-wheel-drive 3.6s. This delta is expected: better traction plus electric motor advantages (peak torque at 0 RPM) beat dependency on an ICE torque curve that always disappoints off the line.
What’s telling is that the two converge by 100mph (8.0s vs 7.8s) and at quarter-mile trap speeds (119mph vs 125mph), demonstrating the battery-electric performance curve flattens under sustained load. The 136mph top-speed limitation further confirms the power-delivery constraints of the current battery architecture.
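A quick back-of-envelope check of the figures above (all numbers are taken from the article; only the arithmetic is mine) shows the same story in two lines: a strong launch advantage that evaporates under sustained load.

```python
# All figures below come from the article; this just does the arithmetic.
weight_lb = 5925
power_hp = 670
lb_per_hp = weight_lb / power_hp  # ~8.8 lb per horsepower of mass burden

ev = {"0-60": 3.3, "0-100": 8.0, "trap_mph": 119, "top_mph": 136}
hellcat = {"0-60": 3.6, "0-100": 7.8, "trap_mph": 125}

# Positive means the EV is ahead at launch; the ICE car is ahead by 100 mph.
delta_launch = hellcat["0-60"] - ev["0-60"]       # EV quicker by 0.3 s
delta_sustained = ev["0-100"] - hellcat["0-100"]  # Hellcat quicker by 0.2 s

print(f"{lb_per_hp:.1f} lb/hp; launch delta {delta_launch:+.1f}s, "
      f"0-100 delta {delta_sustained:+.1f}s")
```

The crossover between 60 and 100 mph is exactly the signature of instant torque up front and thermal/power limits under sustained draw.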
Braking performance (151ft from 70mph) is adequate given the mass, but the “seesawing” behavior and “excessive understeer” during skidpad testing suggest suboptimal weight distribution and chassis tuning. That means significant safety concerns in emergency-avoidance scenarios.
All in all, the Hellcat is yesterday’s lettuce. The Charger EV simply beats it, without flourish, and yet it could have done much better given where performance norms are these days for new cars. I had expected sub-3s performance, maybe even approaching sub-2s. And better handling: the low, centered battery mass should be leveraged into a handling upgrade.
Welcome to the Stupidity of AI-Powered Policy: When Governance is Reduced to One-Move Chess
Send it!
Three recent “AI” developments in the news signal a profound shift in American governance.
First, the BBC says that United Nations aid agencies have received a dubious 36-question form from the US Office of Management and Budget asking if they harbor “anti-American” beliefs or communist affiliations. That in itself should be proof enough that AI systems cannot be trusted to stop themselves from catastrophic errors, let alone prevent an accidental launch of nuclear missiles.
Second, the Atlantic tells us how the Department of Government Efficiency (DOGE) appears to be rapidly implementing AI systems in federal agencies despite significant concerns about their readiness, with plans to replace human workers with incompetent robot operators at the General Services Administration (GSA). This is much the same as Tesla initially boasting it would replace all workers with robots, which failed horribly and forced a rapid roll-back in disaster mode.
This all comes as Facebook, just as one obvious example, has said content generation and moderation is a bust because of unavoidable integrity breaches in automated communications systems.
Zuckerberg acknowledged more harmful content will appear on the platforms now.
The “best” attempts by Facebook (notably started by someone accused at Harvard of making no effort at all to avoid harm) have been just wrong, like laughably wrong and in the worst ways, such that they can’t be taken seriously.
This week [in 2021] we reported the unsurprising—but somehow still shocking—news that Facebook continues to push anti-vax groups in its group recommendations despite its two separate promises to remove false claims about vaccines and to stop recommending health groups altogether.
Foreshadowing clumsy and toxic American social media platforms in 2025, Indian troops in the Egyptian desert get a laugh from one of the leaflets Field Marshal Erwin Rommel took to dropping behind the British lines after his 1942 ground attacks failed. The leaflets, of course strongly anti-British in tone, were printed in Hindustani, but far too crude to be effective. (Photo was flashed to New York from Cairo by radio. Credit: ACME Radio Photo)
However, despite the best engineers warning that AI technology is unsafe and unable to deliver safe communications without human expertise, the three parallel developments above are not isolated policy shifts.
They appear to be lazy, rushed, careless initiatives that represent a fundamental transformation in governance: from thoughtful, outcome-oriented service to an unaccountable extract-and-run gambit. It’s a shift from career public servants making things work through concentrated effort, to privileged disruptive newcomers feeling entitled to rapid returns without any idea of what they are even asking. The contextless, memory-less nature of both the latest AI systems and certain rushed anti-human leadership styles is now upon us.
The One-Move-at-a-Time Problem in International Relations
When powerful AI systems are deployed in policy contexts without proper human oversight, governance begins to resemble what international relations theorist Robert Jervis would call a “perceptual mismatch” and actors will fail to understand the complex interdependence that shapes the global system.
It becomes a game of chess played one move at a time, with no strategy beyond the immediate decision other than selfish gains.
There is a query [to the UN] about projects which might affect “efforts to strengthen US supply chains or secure rare earth minerals”.
This is the worst possible way to play on the world stage, revealing evidence of an inability to think, learn, adapt or improve. America looks sloppy and greedy, a kind of desperation for wealth extraction, like a 1960s dictatorship.
A Tofflerian Acceleration Crisis
Alvin Toffler, in his seminal work “Future Shock” (1970), warned about the psychological state of individuals and entire societies suffering from “too much change in too short a period of time.” Toffler effectively warned us that accelerating our political systems, as AI-driven governance now does, would frighten anti-science communities into a panic. The domain shift opens a vacuum of trust we might call “policy shock”, enabling “strong man” (snake oil) decisions made in spite of (ignoring) historical context, by removing consideration of second-order effects.
We go from a line with points on it to no lines at all, just a bunch of points.
The UN questionnaire perfectly embodies this anti-science acceleration crisis: complex geopolitical relationships developed over the decades since World War II, reduced to thoughtless binary questions, processed in a flawed algorithmic rush to check unaccountable boxes rather than at an intelligent, diplomatic pace for measured outcomes.
Similarly, the GSA’s AI chatbot, conceived as an experimental “sandbox” under the previous administration, is being fast-tracked as a productivity tool amid mass layoffs. It represents exactly the kind of technological acceleration Toffler warned would be devastatingly self-defeating.
The State Department’s AI-powered “Catch and Revoke” program amplifies the acceleration as well, with a senior official boasting that “AI is one of the resources available to the government that’s very different from where we were technologically decades ago.” Well, as General LeMay would say: now that we have the nuclear bombs, what are we waiting for, let’s drop them all and get this Cold War over with already! He literally said as much, for those of us who appreciate the importance of studying history.
Source: “Dar-win or Lose: the Anthropology of Security Evolution,” RSA Conference 2016
As The Atlantic reports, what was intended to be a careful testing ground immersed in scientific rigor is being transformed into a casino-like gambling table to replace human judgment across federal agencies. At the very moment human judgment is most needed for complex social and political determinations with disruptive technology, the administration keeps talking about rapid speed of implementation to replace any careful consideration of potential consequences.
You could perhaps say Elon Musk has been pulling necessary sensors from autopilot cars as an “efficiency” move (a la DOG-efficiency), at the very moment every expert in transit safety says such a mistake will predictably cause horrible death and destruction. We in fact need the government workers; we in fact need the agencies; just as we in fact need LiDAR in autopilot cars, detecting dangers ahead to ensure the system is designed for necessary action to avoid disaster.
The Chaotic Actor Problem
Political scientist Graham Allison introduced the concept of “organizational process models” to explain how bureaucracies function based on standard operating procedures rather than rational calculation. But what happens when leadership resembles what computer scientists call a “memoryless process” of self-serving chaos, where each new state depends only on the current inputs, not on any history that led there?
A leader who approaches each day with no memory of previous positions, much like an AI chatbot that restarts each conversation with limited context due to token constraints, creates a toxic tyrannical governance pattern that:
Disregards Path Dependency: Ignores how previous decisions constrain future options
Fails to Recognize Patterns: Misses recurring issues that require consistent approaches
Creates Strategic Incoherence: Generates contradictory policies that undermine long-term objectives
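The “memoryless process” the author borrows from computer science is the Markov property: the next state depends only on the current input, not on any history. A minimal sketch, entirely my own illustration with made-up inputs, of how a memoryless decision rule flip-flops on noisy signals while even a small amount of state can at least flag its own reversals:

```python
def memoryless_policy(signal):
    """Decide from the current input alone, like a chat restarted each turn."""
    return "expand" if signal > 0 else "cut"

def stateful_policy(signal, history):
    """Same rule, but history lets the actor notice it is reversing itself."""
    decision = "expand" if signal > 0 else "cut"
    if history and history[-1] != decision:
        history.append(decision)
        return f"review: reversing {history[-2]} -> {decision}"
    history.append(decision)
    return decision

signals = [1, -1, 1, -1]  # noisy, alternating daily inputs (hypothetical)
memoryless = [memoryless_policy(s) for s in signals]
history = []
stateful = [stateful_policy(s, history) for s in signals]

# The memoryless actor flip-flops every single step; the stateful one
# recognizes each flip-flop as a reversal worth reviewing first.
print(memoryless)  # ['expand', 'cut', 'expand', 'cut']
```

The point of the toy is path dependency: the two policies apply the identical decision rule, and only the one carrying history can detect strategic incoherence.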
Historians have noted how authoritarian systems in the 1930s disrupted institutional stability through what scholars later termed “permanent improvisation”, replacing the rule of law with unpredictable governance and nothing but a loyalty test to Hitler. The current administration’s approach shares concerning similarities with those systems, relying on constant policy shifts and disregard for factual consistency.
The danger of the memoryless paradigm appears to be materializing in real time. The Atlantic reports that the GSA chatbot, which could be used to “plan large-scale government projects, inform reductions in force, or query centralized repositories of federal data”, now operates with the same limitations as commercial AI systems.
Systems that notoriously struggle with factual accuracy, that exhibit dangerous biases, and that have no true understanding of context or consequences are unfit for implementation without governance. But for the memoryless anti-governance actor, it’s like holding down the trigger on an automatic weapon while swinging it wildly, without caring at all about who or what gets hurt.
The State Department’s “Catch and Revoke” program represents perhaps the most alarming implementation of this memoryless approach. Policing speech with faulty technology is a nightmare straight out of the President Jackson experience (leading into the Civil War) or the President Wilson experience (leading into WWII). Some have compared today’s AI surveillance to the more recent President Nixon experience and “Operation Boulder” from 1972. Remember when Dick Cheney admitted he had been hired into the Nixon administration to help find students to jail for opposing Nixon? America does not have the best track record on this, and yet today’s technology is different because it makes the scope vastly more expansive and the consequences more immediate.
As one departed GSA employee noted regarding AI analysis of contracts: “if we could do that, we’d be doing it already.”
The rush into flawed systems creates “a very high risk of flagging false positives,” yet there appears to be little consideration of checks against this risk, further proving memoryless governance fails to learn from past technological overreach. This concern becomes even more acute when the stakes involve not just contracts but people’s citizenship status, as evidence emerges of students leaving the country after visa cancellations related to their speech.
Constructivism vs. Algorithmic Reductionism
International relations theorist Alexander Wendt’s constructivist approach argues that the structures of international politics are not predetermined but socially constructed through shared ideas and interactions. AI-driven policy, by contrast, operates on algorithmic reductionism that horribly reduces complex social constructs to simplified computable variables.
Imagine trying to represent social interaction as a simple mathematical formula. Hint: Jeremy Bentham tried hard and failed. We know from his extensive work that it doesn’t work.
The AI-generated questionnaire sent to the UN is an attempt to categorize humanitarian organizations as either aligned or misaligned with American interests. Such a stupid presentation of American thought reflects a reductionist approach, ignoring what constructivists would recognize as the evolving, socially constructed nature of international cooperation.
It’s like American foreign policy being turned into a slow robot wearing a big hat and saying repeatedly “Hello, I am from America, please answer whether I should hate you”.
The State Department’s new “Catch and Revoke” program employs AI to scan social media posts of foreign students for content that “appears to endorse” terrorism. This collapses complex political discourse into binary classifications that leave no room for nuance, context, or constructivist understanding of how meaning is socially negotiated. And that’s not to mention, again, that Facebook says it has conclusively proven the technology isn’t capable of this application, which is why it is disabling speech monitoring.
Think about the politics of Facebook saying all speech has to be allowed to flow because even their best and most well-funded tech simply can’t scan it properly, while the federal government plows into execution of harsh judgment based on rushed, low-budget tech with dubious operators.
Orwellian Optimization Without Context
Chess algorithms excel at optimizing clearly defined objectives: capture the opponent’s pieces, protect your own, and ultimately checkmate the opposing king. Similarly, an AI tasked with “reducing foreign aid spending” or “prioritizing America first” will surely generate questions designed to create easily broken (gamed, if you will) classifications without grasping even a little of the complex ecosystem of international humanitarian work.
Playing Tic-Tac-Toe With Baseballs
Political scientist Joseph Nye’s concept of “soft power” — the ability to shape others’ preferences through attraction rather than force and coercion — becomes particularly relevant here. A chess player who can only ever focus on a next move will inevitably lose to someone thinking five moves ahead (assuming they both play by the rules, instead of believing they can never lose). Similarly, questionnaires that reduce complex international relationships to yes/no questions miss how the dismantling of humanitarian cooperation rapidly diminishes America’s soft power projection. Trust in America is evaporating and it’s not hard to see why if you can think more than a single move ahead.
Human Cost of Algorithmic Governance
We know from Elon Musk’s use of AI in Tesla that many more people are dying than would have died without it. The cars literally run over people because operators fail to appreciate and prepare for when their car will run over people. Why? Because Elon Musk’s aggressive promotion of emerging technologies despite documented limitations raises questions about… ability to see harms. His well-researched methods of public sentiment attack — similar to advance fee fraud — are known to be highly successful in disarming even the most intelligent (e.g. doctors, lawyers, engineers) when they lack the domain expertise necessary to judge his fantasy-level claims of a miraculous future. So if such a deadly pattern of deceptive planning becomes normalized in federal government, what might we expect?
Safety Margin Collapse: Complex humanitarian principles based on deep knowledge like neutrality and impartiality become impossible to maintain when forced into binary classifications. Similarly, as The Atlantic reports, the nuanced judgment of civil servants is being replaced by AI systems that struggle with “hallucination,” “biased responses,” and “perpetuated stereotypes”, all acknowledged risks on the GSA chat help page. This loss of nuance extends to political speech, where the State Department is using AI to determine if social media posts “appear pro-Hamas”, which is so vague it could capture legitimate political discourse about protecting Israelis from harm. I can’t overemphasize the danger of this collapse, like warning how the machine-gun poking out of a balcony in Las Vegas exploited the binary mindset on gun control forced by the NRA.
Accelerated Policy Shifts: What the infamous Henry Kissinger liked to call the “architecture of the international order” will degrade rapidly, not through deliberative process but through algorithmic errors reminiscent of the Cuban Missile Crisis. Domestically, we’re already seeing this acceleration, with DOGE advisers reportedly feeding sensitive agency spending data into AI programs to identify cuts and using AI to determine which federal employees should keep their jobs. Need I mention that AI programs lack privacy controls? The OPM breach was minor compared to DOGE levels of security negligence. The State Department’s AI initiative has already resulted in push-button visa revocations and at least one student leaving the country, like something out of a Kafka novel, bypassing deliberative process and representation in human judgment.
Feedback Loops: As organizations adapt their responses to pass algorithmic filters, we risk creating what sociologist Robert Merton called a “self-fulfilling prophecy” of a system that outputs the adversarial relationships it was designed to detect. This dynamic resembles how some surveillance technology companies may inadvertently create the very problems they claim to solve, potentially creating systems (e.g. Palantir) that generate false positives while marketing themselves as solutions. This mirrors the current situation where, as one former GSA employee told The Atlantic, AI flagging of “potential fraud” will likely generate a fraud from numerous false positives, where no checks appear to be in place. Free speech advocates are already noting the “chilling effect” on visa holders’ willingness to engage in constitutionally protected speech, which is exactly the kind of feedback loop that reinforces compliance through false positives at the expense of democratic values.
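The false-positive worry above is plain base-rate arithmetic. A hedged sketch with made-up rates (the article cites none) shows why flags from even a “good” classifier are mostly wrong when the thing being hunted is rare:

```python
# All three rates below are assumptions for illustration, not from the article.
base_rate = 0.001           # assume 1 in 1,000 contracts is actually fraudulent
sensitivity = 0.95          # assume the flagger catches 95% of real fraud
false_positive_rate = 0.05  # assume it wrongly flags 5% of clean contracts

true_flags = base_rate * sensitivity
false_flags = (1 - base_rate) * false_positive_rate
precision = true_flags / (true_flags + false_flags)

print(f"Share of flags that are real fraud: {precision:.1%}")
# Under these assumptions, fewer than 2% of flags are real fraud; the
# remaining 98%+ are innocent cases swept up by the filter.
```

With no human checks in place, that 98% of innocent flags is exactly the raw material for the chilling-effect feedback loop the paragraph describes.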
Closing One Eye Around the Blind, Making Moves Against One-Move Thinking
Francis Fukuyama, despite his “End of History” thesis, later recognized that liberal democracy requires ongoing maintenance and adaptation. Similarly, effective governance, like chess mastery, requires thinking many moves ahead and understanding the entire board. It demands appreciation for strategy, history, and the complex interplay of all pieces far beyond mechanical application of rules.
The contrast between governance approaches is striking. The previous administration’s executive order on AI emphasized “thorough testing, strict guardrails, and public transparency” before deployment. As a long-time AI security hacker I can’t agree enough that this is the only way to get to where we need to go, to innovate in security necessary to make AI trustworthy at all. However, the current radical approach by anti-government extremists dismantling representative government, as The Atlantic reports, appears to treat “the entire federal government as a sandbox, and the more than 340 million Americans they serve as potential test subjects.”
Tesla’s autopilot technology has been associated with a rapid rise in preventable fatalities, raising serious questions about whether the technology was deployed before adequate safety testing. The rapid deployment of unproven AI systems with life-or-death consequences represents a concerning pattern that prioritizes technological short-cuts and false efficiency over the rigorous safety protocols that deliver long-term savings.
This divergence is plainly visible in policy moves that have all the hallmarks of loyalists appointed by Trump to gut the government and replace it with incompetence and graft machines. Whereas determining whether a move constitutes risk traditionally required careful human judgment weighing multiple factors against likely outcomes, the “Catch and Revoke” program reflects a chess player focused solely on the current move and completely blind to what’s ahead. When AI flags a social media post as “appearing pro” anything, that alone can now trigger a massive change in civil rights. This is having real-world consequences, just as Tesla has been killing so many people with no end in sight. Raising alarm about the constitutional implications of unregulated AI belongs in the same context as allowing Tesla to continue operating manslaughter robots on public roads.
All these AI developments exemplify a radical difference in concepts of integrity, and of what constitutes a breach, between strategic chess thinking and playing one move at a time.
If we’re entering an era where AI systems—or leaders who operate with similar memoryless, contextless approaches—are increasingly involved in policy implementation, we must find ways to reintroduce institutional memory, historical context, and strategic foresight.
Otherwise, we risk a future where both international relations and domestic governance are reduced to a poorly played game ruled by self-defeating cheaters—as real human lives hang in the balance. The binary questionnaire to UN agencies, the rapid deployment of AI across federal agencies, and the algorithmic policing of social media aren’t just parallel developments—they’re complementary moves in the same dangerous game of governance without memory, context, or foresight.
We’re a decade late on this already. Please recognize the pattern before the game reaches its destructive conclusion. The Cuban Missile Crisis was a race to a place where nobody is a winner, and we’re not far from repeating that completely stupid game, taking one selfish and suicidal step at a time.
I won’t bore you up front with the fact that Tesla since 2016 has repeatedly been proven by engineers to be a dangerous fraud.
Truth can be boring, and safety design failures can be especially boring.
Take for example this dangerous design flaw in a Tesla cast aluminum frame, explained by basic physics. See what I mean? No, of course you don’t, because… ZZZZZZ, you fell asleep by the time I said “explained”. I get it, I do. As a security guy with decades of board-room experience, I have seen it many times before. I’ve literally watched a CEO in a zillion-dollar NYC skyscraper office fall asleep while being presented with evidence of immediate danger to his customers. Wake up, America.
Huh? What? Why should he care? Get woke for what?
Spoiler alert: simple tests of Tesla’s dangerous design decisions have repeatedly proven them a known deadly threat to public safety… for a very long time. This post is about a cool new kid on the block.
March 2025: a Tesla autopilot still runs over children like it’s 2016. Over 50 people have so far been killed by Tesla autopilot design flaws.
Well, a dead new kid.
A mannequin of a kid, run over by a Tesla in a CrunchLabs-funded test (in partnership with a LiDAR company)… like a drunken Russian sea captain crashing broadside into an anchored oil tanker. Side burn, literally.
A catastrophic demonstration of information warfare: The Solong container ship’s unnatural trajectory into a U.S. military oil tanker in March 2025 bears all the hallmarks of exploitation in known navigation system vulnerabilities.
So let’s be clear that anyone with an ounce of integrity has known for decades why and how LiDAR is considered by experts to be a mandatory autopilot safety requirement. Navigation safety is very well known and extensively studied, despite Tesla pretending it’s all up in the air. Cameras are very poor at object recognition due to visibility issues and, just to make this point abundantly clear, Tesla implemented low-quality cheap webcams. It’s such an intentionally bad design decision as to appear some kind of sick, cruel joke on all those who have died as a result of trusting Tesla with their lives.
Everyone knew that Elon Musk removing LiDAR was a short-sighted (no pun intended) “cost cutting” measure that would definitely kill people, let alone the subsequent “cost cutting” of slapping in flawed AI with cheap, weak cameras… And yet, how many saw it as foreshadowing for Elon Musk in 2025 rapidly firing American government workers and shutting down critical agencies as “unnecessary” in his latest unhinged “cost cutting” fallacy? Remember when he argued “the best part is no part”, as if to warn everyone he believed the best life would be no life at all? As if to admit everyone was going to die?
Making Americans Dead Again.
Just as dozens of people unnecessarily have been killed by Tesla autopilot by having its LiDAR stupidly cut out (leading to a Tesla death rate higher than domestic terrorism), Elon Musk is on track (no pun intended) to get even more people killed with his extremist nihilist “the best is nothing” attack on American federal government services.
Killing children is by design, I’m afraid. ‘Pro-natalists’ like Musk claim they aren’t racist, but their pressure to have children is solely focused on white women, while they back policies that literally kill non-white children. He’s a eugenicist.
Notably, children in the road statistically tend to be poorer, which in America’s historic caste system means non-white children are far more in danger from Tesla than white.
March 2025: a Tesla autopilot crashes through a wall and runs over a mannequin, totally blind to objects and humans in the road. Arguably the brand has only gotten worse as it intentionally removes critical safety equipment, slashing costs despite known risks to life and property.
I say all that because this flashy new video shouldn’t be making news, given it rehashes such a long-known fact. Instead it’s already at 5M views and rocketing higher and higher. The creator pumps his volume to 11 on being “first” and doing something novel. He falsely implies that we didn’t already have copious tests showing Elon Musk knowingly removed safety and intentionally caused deaths. That’s a dangerous tactic, undermining the community to benefit the content creator (subscribe, subscribe, money, money, buy this new LiDAR device).
Everyone knows the Dawn Project, right? They suck at marketing, no doubt. (The death total is 52 now… and it rises rapidly.)
Yet the Dawn Project also has top-notch engineers. They staff the BEST quality engineers and have been doing these Tesla mannequin demonstrations for years. Nobody should make a new video doing what the Dawn Project has been doing, and nobody should fail to mention such very well known prior art… oh, wait, unless you are a fan of Walt Disney.
Ahhh, Disney. That a**hole. This new video guy is, like, really into Disney.
Disney NEVER was about credit. Disney is the quintessential idea-theft model used in attention seeking for profit. Random example? Who can forget how Disney saw a woman in LA who had built a successful family business after she invented tortilla chips. Doritos were then launched as a brand with high production value to basically sell across the street from her, taking her business away in direct competition without credit due. Everyone knows about Doritos; nobody remembers the woman inventor they were meant to erase from history.
El Zarape 1950s Tortilla Factory in Los Angeles, before its President Rebecca Webb Carranza was erased from history by splashy Disney productions. Click to enlarge.
What was her name? She literally invented tortilla chips and like nobody remembers her?
That’s Disney.
Many, many engineers have previously tested and proven what is being repeated in this new high-production video. I believe my public presentations on hacking LiDAR in automotive systems (*cough* Tesla *cough*) go back over a decade, to 2014. This unfortunately had the opposite of the intended effect, as high-integrity LiDAR was perceived to be expensive compared with dreams of cheap fantasy-3D-optics. By 2015 the BlackHat conference produced even more proofs (PDF) of what I had been presenting the year before.
2015 BlackHat presentation from a test lab with results on the driverless attacks I had presented the prior year
Beyond that I have met with real LiDAR experts over the last decade as they spilled ink criticizing the safety flaws of Tesla. These are legit aerospace engineers who laid it out for all of us to see. And, like the rest of us, they got zero attention. I bet you wouldn’t even know their names. My talks were based on such priors and peers, as well as current research, and always used citations.
Very unlike Disney.
Slide from my 2014 presentation on ethics and Big Data Security as it related to driverless cars. Mannequins back then proved them blind. We knew. They knew. And yet people still bought into a blood-stained Tesla brand, killing dozens if not hundreds since then.
What’s being shown in this video is thus VERY OLD AND WELL KNOWN as a Tesla design flaw forced by Elon Musk. That can’t be said enough, and it should be said in the context of all the experts in agreement over time, because the timeline and the persistence of experts matter.
Ok, ok. I know what you’re thinking. Production matters. Getting attention matters. Fine, I can’t say I don’t appreciate a novel new presentation style. I do appreciate someone having the sizzle and sass of a heavily-funded influencer meme. The “gosh golly” squeaky narrator voice really spins its way into our heads like a Mickey Mouse Clubhouse song. I get it. The Snow White or Mermaid story under a Disney-style big-production hammer can fool people into thinking everything is new, as if kids had not been told that same story since forever. Is this new video any different? We can appreciate his polish, surely, without losing sight of the real origin thread, right? After over a decade of this same message, does the new video break through?
I just wonder why his high-budget anti-Tesla production didn’t end with the crucial statement that THREE KIDS DIED in the simulated safety tests. Too honest? Too real? The story here AGAIN is that Tesla knowingly sold straw huts to pigs while falsely claiming to be selling them space-age materials that prevent wolf visits… predictably resulting in three poor little pigs being eaten by the wolf.
Let’s face it, this new “Tesla can’t see” video ups the game with some production whiz (as in propaganda wizards, like military intelligence agencies). The star of the show clearly goes to Disneyland a lot, so we can understand his aspirational “wow kids” style (his marketing foo… formerly known as propaganda).
The polish is very different from the work I usually see from focused safety engineers, who correctly regard most forms of attention seeking as an integrity risk, or even loss/corruption. The buried lede is that a LiDAR company did the work with him, promoting the safety advantages of LiDAR. It’s practically a commercial, an ad in itself. And then it ends with an icky plea for subscribers.
Watch it if you want, but it simply tells you what you already should have known: huff and puff, blow the camera hut down… all the Elon Musk “efficiency” really just means cheats and shortcuts that kill people, the way corruption always does.
It’s a long video so the TL;DR is a straw hut (camera) isn’t a brick house (LiDAR) when the wolf blows in.
Elon Musk’s life of white-supremacist privileges, which led to his alleged habit of intentional “efficiency” shortcuts to avoid real engineering work, means that twenty times three little pigs are dead.
For real, over 50 people have been killed by Tesla Autopilot: Tesladeaths.com. Or perhaps, to the real point, where has this new video guy been for the last ten years?
Elon Musk is spreading the lies thick, given his latest tweet pumping Trump:
Look at what they did to President @realDonaldTrump. He was loved by democrats until he ran for president. Now they call him Hitler, Mussolini, Stalin, etc and try to kill him.
5:40PM March 14, 2025
Ok, that’s just so obviously false as to be dangerous proof that Musk has lost touch with reality.
Donald Trump was very widely disliked by many Americans, especially in Democratic and progressive circles, long before his presidential campaign.
Trump was heavily involved in promoting the “birther” conspiracy theory well before he ran for president. He became one of the most prominent voices pushing this false narrative around 2011, about four years before he announced his presidential campaign in June 2015.
In early 2011, Trump began appearing on television programs like Fox News, where he repeatedly questioned President Obama’s birthplace and citizenship despite abundant evidence that Obama was born in Hawaii. Trump claimed to have sent investigators to Hawaii and suggested they had found concerning information (though he never provided any evidence of this).
This racist conspiracy theory was a significant part of why many people, especially Democrats and progressives, strongly disliked Trump years before he entered the presidential race. The birther movement was widely viewed as an attempt to delegitimize America’s first Black president by casting him as a foreigner and was condemned as racist by many political commentators.
Trump only publicly acknowledged that Obama was born in the United States in September 2016, during his presidential campaign, because he knew how disliked he was; and even then he falsely claimed that Hillary Clinton had started the rumor.
His public image in New York City, where he was most visible, and therefore most detested, was particularly negative among long-term residents, who had to hear his disgusting racism for decades and watch all his ideas fail (steak, planes, vodka, casinos…).
The idea that this guy, on the run from everyone trying to collect his debts, suddenly became controversial only after running for president ignores his long, disgraced history.
Are we done here yet?
I’ll continue. Beyond the birther fraud and being deeply unpopular anywhere he was really known, there were many other reasons he faced strong criticism:
His business reputation included allegations of fraud at Trump University
His environmental record and battles with local communities over development projects
His tabloid persona and controversial statements about women
His personal attacks against critics and celebrities
His tendency toward exaggeration and self-promotion that many found off-putting
He was a deeply polarizing figure, hated by Democrats, well before entering politics at the national level.
The historical record clearly shows that Trump wasn’t suddenly disliked only after announcing his presidential run; he had cultivated significant friction through his many immoral actions and statements for years prior.
Elon Musk suggesting that Democrats only began criticizing Trump after he ran for president is false, revisionist, and evidence of what’s really wrong with both of them.
Musk also mischaracterizes the normal political process, as if he doesn’t understand that candidates face heightened scrutiny because… duh, inviting attention is what a campaign is and does. Trump’s pre-existing controversies naturally received more attention once he sought the presidency, as none of it needed to be newly invented.
Let me put it another way. The only voters, Democrat or Republican, who really liked Trump before or after he ran for president, are people who aligned with his overt racism. It was unmistakable ever since the 1970s, so there’s nothing new about who really likes or hates him. And in that sense, when some particular people call him Hitler, they probably mean it as a compliment.