Another Tesla crash appears to bear the tragic hallmarks of Autopilot error: the car slammed into a concrete highway barrier at high speed, killing two people and critically injuring two others.
Source: ABC7
The solo-vehicle collision involving a Tesla Model 3 happened around 2 a.m. near Reservoir Street, according to the California Highway Patrol. “For unknown reasons, the driver of the Tesla lost control of the vehicle, collided with the center divider wall, traveled up a dirt embankment and was pronounced deceased at the scene,” CHP said in a press release. The 21-year-old female driver and a man in his 30s were killed in the crash…
Patch reporters add this harrowing detail about the Tesla crash:
A critically injured baby found buried alive in debris along state Route 60 in Chino was the victim of an early morning crash.
It reads as though the occupants were asleep and unbelted while the car drove itself into a wall at highway speed, killing them.
Notably, just a week ago in Texas, at 2:30 a.m., a Tesla drove at high speed into a wall, similarly killing the two front occupants.
Good evening. This is Davi Ottenheimer reporting from the American highways, where a different kind of war is unfolding with mass casualties mounting and accountability scarce.
We have become a nation accustomed to technological progress without pause for moral accounting. Tonight, we bring you a report not from distant jungles and deserts, but from our own streets and highways, where Americans are dying in encounters with Tesla vehicles that operate beyond meaningful human control or consequence.
The facts, plain and unadorned: Tesla vehicles equipped with self-driving technology have been involved in at least five fatal collisions with motorcyclists. Five American lives extinguished. Five families shattered.
Brevity is the soul of wit, and I am just not that witty. This is a long article, so here is the gist of it:
The NHTSA’s self-driving crash data reveals that Tesla’s self-driving technology is, by far, the most dangerous for motorcyclists, with five fatal crashes that we know of (a pattern anyone can check against the public data, as sketched below this list).
This issue is unique to Tesla. Other self-driving manufacturers have logged zero motorcycle fatalities in the same time frame.
The crashes are overwhelmingly Teslas rear-ending motorcyclists.
Death from behind. A fatal stab in the back.
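Bold claims deserve reproducible checks, so here is a minimal sketch, assuming NHTSA’s public Standing General Order (SGO) incident data downloaded in CSV form, of how one might tally fatal motorcycle crashes by manufacturer. The filename, column names and field values below are illustrative assumptions, not verified schema; confirm them against the actual download at nhtsa.gov before trusting any counts.

```python
import pandas as pd

# A minimal sketch, assuming NHTSA's Standing General Order (SGO) incident
# data downloaded as CSV from nhtsa.gov. The filename, column names and
# values here are illustrative assumptions; check them against the actual
# file before trusting any counts.

df = pd.read_csv("SGO-2021-01_Incident_Reports_ADAS.csv")  # assumed filename

fatal_motorcycle = df[
    (df["Highest Injury Severity Alleged"] == "Fatality")  # assumed column/value
    & (df["Crash With"] == "Motorcycle")                   # assumed column/value
]

# Group fatal motorcycle crashes by manufacturer to test whether Tesla
# really stands alone, as the NHTSA data is said to show.
print(fatal_motorcycle["Reporting Entity"].value_counts())
```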
And yet, the response from Washington has not been to hold the architects of these systems accountable, but rather to aggressively shield them with unprecedented legal protection.
The pattern is unmistakable—Tesla vehicles approaching motorcyclists from behind, failing to detect their presence, and striking them with fatal force as if programmed at the factory for manslaughter.
What separates these incidents from the fog of tragic accidents is their selectivity: while other manufacturers with similar technology report no such motorcyclist fatalities, Tesla’s record stands alone.
The proposed punishment for anyone standing against the damaging distribution of these violent death machines? Twenty years imprisonment—a sentence more severe than many receive for violent crimes against human beings.
…Attorney General Pam Bondi has said she intends to seek a 20-year prison sentence…. Bondi has vowed to treat these attacks [on Tesla property] as “domestic terrorism”…no one appears to have been hurt in such incidents.
Would you accept the risk of 20 years in jail if you knew your actions wouldn’t hurt anyone yet could save just one life, let alone dozens of them?
Meanwhile, the corporation behind the sharp rise in fatal crashes faces minimal or no scrutiny and continues operations with government blessing for killing ever more Americans in cold blood.
One cannot help but recall the words of Senator Fulbright on our Vietnam involvement:
Power tends to confuse itself with virtue and a great nation is particularly susceptible to the idea that its power is a sign of God’s favor.
Have we now extended this confusion to one technological enterprise alone, believing Tesla’s dubiously inflated market power signals its moral exemption?
This is not about partisan politics. This is about whether we have surrendered our capacity to demand that technology serve humanity rather than endanger it by design. It is about whether we place higher value on protecting machines of a South African madman than on preserving American human life.
The motorcyclists who have died deserved better than to become statistics in a technological warfare experiment conducted on public roads. And the American public deserves leadership that prioritizes their safety over corporate interests.
Good night, and good luck.
Edward Murrow’s direct style and in-person coverage of the rise of Nazism, the 1939 Nazi invasion of Poland, and the Nazi bombing of Britain won him the trust of the public and esteem among other reporters.
The splashy and reckless bombing campaign against the Houthis has stretched into months with an American price tag approaching $1 billion.
It’s one of the most inefficient military campaigns in history, with little to nothing of significance to show for it. We’re in fact witnessing a very familiar historical pattern: a superpower exhausting itself against an entrenched insurgency that simply refuses to break. This was well understood by the 1970s, although some obviously still refuse to give up comic-book fictional narratives about death from above.
The Houthis, described ominously in some intelligence assessments as “the honey badgers of resistance,” appear to be not just surviving American strikes, but potentially benefiting immensely from them.
Bombing Has a History of Diminishing Returns
The historical record of even constant flyovers against determined insurgencies dug in underground is dismal:
Vietnam?
Despite dropping more bombs than in all of World War II, including the heavily publicized Operation Rolling Thunder and Operation Linebacker campaigns, America couldn’t break North Vietnam’s will. The Vietnamese moved underground, dispersed their forces, and rebuilt infrastructure as quickly as it was destroyed. Each bombing raid revealed American intelligence without permanently degrading Vietnamese capabilities.
Korea?
Three years of intensive bombing failed to break North Korean resolve. More bombs were dropped than in all of WWII… The country simply moved critical infrastructure underground and dispersed its forces, emerging stronger and more determined. To this day the country has almost no light pollution at night.
Ethiopia?
When the Soviet Union conducted bombing campaigns against insurgents in Ethiopia in the 1970s and 1980s, they only hardened resistance and drove recruitment for rebel forces. The Eritreans not only grew in power, they defeated the Ethiopians, who fielded one of the oldest air forces in the world.
Afghanistan?
Two decades of air campaigns yielded little strategic advantage against the Taliban, who simply waited out each bombing campaign before returning to their previous positions.
Iraq?
America’s extensive “shock and awe” bombing campaign of 2003 created impressive visuals but failed to break Iraqi resistance. Instead, it dispersed forces and drove them underground, setting the stage for years of insurgency. Despite complete air superiority, the US couldn’t bomb its way to stability – it required years of counterinsurgency ground operations.
Lebanon?
Israel’s 2006 air campaign against Hezbollah was expected to cripple the organization within days. Instead, after 34 days and over 7,000 air strikes, Hezbollah emerged with its command structure intact and enhanced regional legitimacy. The campaign actually strengthened Hezbollah’s position politically while depleting Israel’s precision munitions.
Libya?
The 2011 NATO bombing campaign initially appeared successful in removing Gaddafi, but it created a power vacuum that insurgent groups quickly filled. Air power alone couldn’t establish political stability, and the country plunged into ongoing factional conflict despite complete NATO air dominance.
And, of course, Yemen?
Before the US campaign, Saudi Arabia conducted years of intensive bombing in Yemen beginning in 2015, deploying some of the most sophisticated aircraft and munitions in the world with an unlimited budget. Despite a sustained air campaign, the Houthis not only survived but expanded their territorial control and missile capabilities. The Saudi air campaign cost billions while strengthening rather than weakening their adversary.
Which brings us to today.
Why Bombing Fails With Insurgencies
The Houthis must have a copy of the tried and true insurgent playbook that has frustrated big bombers for decades:
Dispersal and hardening: Critical assets are scattered and protected, often underground, limiting the damage from any single strike.
Intelligence asymmetry: Every bombing run reveals what the US knows, while yielding little new intelligence in return. The Houthis gain valuable information about American surveillance capabilities with each attack.
Resource depletion: The US burns through expensive precision munitions while the Houthis conserve their resources for opportune moments.
Narrative advantage: Each bombing campaign reinforces the Houthis’ David versus Goliath recruitment propaganda, potentially swelling their ranks and strengthening resolve.
Strategic patience: The Houthis have survived bombardment for years — they’re prepared to absorb punishment and outlast foreign interventions.
America Staring at the Sun
In pursuing this air campaign, America is effectively depleting its stockpiles of precision munitions for little to no benefit. Worse, it is rapidly revealing its intelligence capabilities, blowing up any advantage it may once have held. Exhausting supplies, stations, airframes and personnel diminishes readiness for actual need, burning down savings with minimal strategic return, as if efficiency doesn’t even matter. All that waste and useless action in fact strengthens the Houthis’ legitimacy in the eyes of their supporters, turning the whole campaign upside down.

In fact, increased shipping traffic through the Red Sea – celebrated as a success metric – may actually mask a surge in Houthi rearmament and resupply operations, much like increased truck traffic on the Ho Chi Minh Trail during Vietnam signaled expanded North Vietnamese logistics rather than American success. We risk misreading our own metrics, where apparent “victories” actually indicate strategic failure.
Ground Truth is Truth
History has consistently shown that air power tends to exaggerate its own effect, and that alone it rarely achieves decisive strategic objectives against determined insurgencies.
…the only times I’ve ever seen the Houthis go to the negotiating table or compromise has been when they’ve been threatened with the realistic prospect of defeat on the ground.
World War II was ultimately won on the ground in Europe, with air power in support. The Pacific theater required island-hopping ground campaigns alongside naval and air operations. Japanese leaders arguably barely registered the nuclear bombs, because all attention was focused on Soviet advances through Manchuria: surrender came quickly after just days of Stalin’s ground offensive, and had little to nothing to do with American bombs (a truth opposite to how WWII is taught in American schools). Even the much-touted air campaign against Serbia in the 1990s only succeeded when combined with credible ground threats.
Getting Grounded
If American policymakers are serious about neutralizing the Houthi threat to Red Sea shipping, which I doubt anyone is at this point, they face some basic choices. They would have to acknowledge the inherent limitations of air power and develop a comprehensive strategy. Not going to happen. They would have to accept that complete elimination of the threat may not be possible without extremely high costs. Not going to happen. And they would have to contemplate diplomatic initiatives with regional partners who have more direct leverage. Never.
While the U.S. overplays its hand and weakens itself by the day, the Houthis will likely continue to absorb the blows while adapting and waiting out America’s expensive display of ineffective force. As they, like other such groups, have demonstrated through decades of conflict, they take a high-altitude punch, get back up, and keep fighting with renewed strength.
With each passing day and over a billion dollars allegedly flushed down an empty hole, America weakens its position while strengthening the narrative of those it seeks to defeat. The honey badgers of Yemen may indeed be laughing at the loose lips of Hegseth, as they watch the world’s most expensive military only hurt itself, sloppily throwing axes at its own shadow.
The “AI 2027” report circulating in tech circles demonstrates an institutional blindness comparable to that which undermined Napoleon’s naval strategy. The authors’ self-positioning as authoritative forecasters merits scrutiny based on historical patterns of predictive failure.
Those familiar with Admiral Nelson’s victories against Napoleon’s navy should immediately recognize the folly of AI 2027’s approach. Napoleon’s navy demonstrated the same institutional blindness and overconfidence that permeates this report, while Nelson’s forces easily exploited such errors through adaptability and practical tactics.
The self-crowned Emperor, who ruthlessly seized control from a revolution spun out of control and sped the country to a moral bottom, established a highly centralized command structure that reflected his own belief in his strategic genius. His naval strategy was in fact fatally inflexible. Admiral Nelson (not to mention the humble, oft-forgotten and brilliant Admiral Collingwood) didn’t use revolutionary tactics so much as exploit fundamental weaknesses in big-splash French prediction systems.
Napoleon’s rigid forecasting prevented tactical adaptation, as well documented in the Battles of the Nile (1798) and Trafalgar (1805). Despite an embrace of technological and organizational innovations, overconfidence and a culture of deference undermined any ability to respond effectively when the actual future of warfare (distributed, agile, asymmetric agency) landed squarely on Napoleon’s head.
Charles Minard’s renowned graphic of Napoleon’s 1812 march on Moscow. The tremendous casualties suffered show in the thinning of the line (1 millimeter less of width = 10,000 men lost) through space and time.
“We Worked at OpenAI” Is a Credibility Mirage
The authors prominently tout their OpenAI pedigrees as if to automatically confer upon themselves prophetic authority. But this is rather like Napoleon’s admirals flaunting their medals and imperial appointments in a rowboat while their ships and crews burn in the background.
The patronage system of the American Civil War also comes to mind, where political connections rather than competence determined who led regiments into battle, often with catastrophic results at places like Cold Harbor and Fredericksburg. I worry the report writers wouldn’t recognize those battle names, or know why they matter so much for technology predictions today. Despite recently acquired technical credentials, their report appears disconnected from the centuries of lessons, from industrial-era battles onward, that best prepare anyone to make predictions about the future.
Any technologist looking at future competition has to account for how past rigid command structures and faith in established technologies (like massed infantry charges) became catastrophically ineffective “all of a sudden”. Rifled muskets and entrenched positions, with their improved range and accuracy, form an easy parallel to how technology predictions today often fail to account for underlying disruptive shifts, the kind first voiced by an uncomfortable minority of experts and later proven by events.
OpenAI has in fact repeatedly demonstrated itself a spectacular failure in prediction and strategy, from promising “AGI in 4 years” multiple times over the past decade to chaotic governance crises and mass staff departures. When people flee an organization in droves, we should question whether that organization’s institutional thinking should baseline any future predictions. Working at OpenAI is presented as a credential, but it’s worth examining: did these authors shape that organization’s misdirection, or merely soak up the internal contradictions before departing? Past affiliation with flawed institutions doesn’t automatically confer predictive authority.
Circular Logic of “Our Past Predictions Worked”
Perhaps most galling is that the 2027 report writers make a bald-faced appeal to their own past predictive success. Really? Bad logic is how we are supposed to buy into their prediction prowess? “We made some predictions before and they came true, therefore trust us now.”
This is exactly the problem of induction that philosophers like Hume systematically dismantled centuries ago. Statistical reasoning suggests that without a sound theoretical framework, a run of past prediction successes should give us little confidence in future success.
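To make the point concrete, here is a toy simulation of my own (an illustration, not anything from the report): given enough forecasters making coin-flip predictions, a handful will compile flawless track records by luck alone, and that track record says nothing about their next call.

```python
import random

# A toy simulation: with enough forecasters making random binary
# predictions, some compile perfect track records through luck alone.

random.seed(42)

FORECASTERS = 1000  # hypothetical pundits guessing at random
PREDICTIONS = 5     # binary calls each (e.g. "milestone X happens this year")

perfect = sum(
    1
    for _ in range(FORECASTERS)
    if all(random.random() < 0.5 for _ in range(PREDICTIONS))
)

# Expected: 1000 * (1/2)**5 ≈ 31 flawless records from pure chance,
# none of which predicts anything about the next call.
print(f"{perfect} of {FORECASTERS} random forecasters went {PREDICTIONS}-for-{PREDICTIONS}")
```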
An over-confident technologist with such a concerning gap in historical and philosophical understanding will therefore make fundamental analytical mistakes. Hume’s empiricism deserves the same respect in tech circles as Newton’s gravity, yet the report writers seem to acknowledge only one kind of fundamental law, leaving themselves blind to the outcomes they should focus on the most.
Think about it this way: The wealthy and powerful fly in airplanes, believing they’ve conquered gravity through technology – yet they still respect gravity’s laws because ignoring them would mean their own death. Similarly, these same elites soar above society’s problems, but unlike with gravity, they often disregard ethical principles because when ethics fail, it’s usually others who suffer the consequences, not themselves.
The sheer audacity of AI 2027’s circular credential-building betrays a fundamental misunderstanding of empirical reasoning and ethical guardrails. Bertrand Russell or John Stuart Mill would have a field day dismantling this logical house of cards. The authors expect us to trust them now because they were right before, without providing any causal mechanism connecting past and future predictions. This is precisely the kind of confusion Wittgenstein warned against. In the Tractatus, clearly showing Hume’s influence, he stated that the cause-effect relation cannot be observed: “belief in the causal nexus is superstition”.
The 2027 AI authors, to put it simply, are mistaking correlation for causation and pattern-matching for understanding. In a domain undergoing explosive non-linear change, where the underlying dynamics shift with each innovation, past predictive success may actually indicate less about future accuracy than the authors assume. Their position is weakened, not strengthened, by their own declared system of thinking. Their logic essentially bootstraps itself from nothing, a self-referential loop that generates credibility out of thin air, much like the adherents of bogus “miasma theory” who evaded the burden of the actual evidence we know today as “germ theory”.
The approach resembles past adherence to miasma theory despite emerging countervailing evidence. Semmelweis’s experience transforming science in the 1850s tragically demonstrated how entrenched institutional thinking resists correction even when lives depend on it.
“You Get What You Pay For” so “Here’s Our Free Opinion”
The report’s disappointing logic flaws and contradictions become even more apparent when it repeatedly invokes a “You get what you pay for” maxim regarding AI systems.
“You get what you pay for, and the best performance costs hundreds of dollars a month… Still, many companies find ways to fit AI agents into their workflows.” – AI 2027 Report
They suggest proprietary, expensive models will inevitably outperform open alternatives, while simultaneously distributing their own analysis for free. Should we question the value of predictions that cost us less? Does a “non-profit”, issuing a free report, not see its own contradiction in saying “you get what you pay for”?
By their own logic, freely distributed predictions must be worthless.
Computing history offers clear counterevidence to this mantra: Microsoft Windows, despite higher cost and corporate backing, has consistently ceded ground to Linux in critical infrastructure. Open-source solutions survived and ultimately thrived because their distributed development model allowed for rapid adaptation and merit-based improvement. Microsoft not only lost the server market, admitting years ago that its own Azure was built on a free OS instead of its own expensive one; the entire world runs on open source and open standards. TCP/IP? HTML? HTTP? HTTPS? TLS? I mean, the size of the mistake by AI 2027 is totally obvious, right?
Do the 2027 authors recall Gopher and why it quickly faded into obscurity? Well, here’s a newsflash from 1994: it died when it began charging fees while superior options remained free. It died quickly. Microsoft Windows has died a slower death. The foundation of AI on the Web itself, a technology these authors take for granted, stands as a powerful historical counterexample to their “you get what you pay for” philosophy. Open standards and free access have repeatedly triumphed over proprietary, fee-based approaches throughout computing history.
AI is no different. Mistral, Llama, and DeepSeek are already rapidly eroding the capabilities gap with closed models—a trend the report seems to overlook. The pattern of open systems eventually outperforming closed ones seems to be holding true in AI already, as could be reasonably expected.
Open protocols and systems eventually displace their proprietary counterparts because it’s simply logical. Imagine if in the 1980s experts had confidently predicted IBM mainframes with expensive protocols and terminals would forever dominate because low-cost or even free personal computing “can’t compete”. The AI 2027 authors seem trapped in exactly the failure of imagination that preceded IBM’s fall. The American pattern is that flashy, well-funded political players make grand predictions that quiet professionals of integrity eventually discredit. Bill Gates’ early anti-hobbyist approach and hot-take memo also exemplify how market positioning and legal firepower often outweigh technical superiority in the short term, while rarely sustaining advantage in the long term.
This pattern echoes throughout military history as well. The Civil War offers another instructive parallel: General Grant’s humanity, integrity and strategic brilliance (he invented multi-domain operations and captured three whole armies!) against Lee’s obsession with personal appearances (he killed more of his own men than any other leader and murdered POWs). In technology as in war, practical effectiveness ultimately outperforms superficial impressiveness, even when the latter attracts more initial attention and investment. The persistent mythologizing of a “butcher” and “monster” like Lee, despite his treason, inhumanity and strategic disasters, mirrors how certain AI companies might continue to command admiration regardless of their actual track record.
Centralization Fixation as Regression
Perhaps most revealing is the report’s fixation on centralized computation and proprietary architectures. The authors envision mega-corporations controlling the future of AI through massive data centers and closed systems.
This brings us back to the Napoleonic naval parallel. The French built imposing warships like L’Orient, a 120-gun behemoth that cost the equivalent of billions in today’s currency, with gilded ornamentation on the stern and hand-carved figureheads meant to inspire awe. Like today’s “Billionaire Boys Club” building AI datacenters, it was a monument to centralized power that in reality amounted to a spectacular liability.
Nelson’s more nimble, distributed fleet model utterly demolished them. L’Orient itself catastrophically exploded at the Battle of the Nile, taking France’s entire “unsinkable” fortune with it—over 20 million francs and Napoleon’s personal art treasures intended to cement his cultural authority, gone in a spectacular flash that lit the night sky for miles.
The destruction of Napoleon’s flagship L’Orient at the Battle of the Nile stands as a concrete example of centralized vulnerability. When it exploded, it took with it not just military capability but the Emperor’s concentrated resources and strategic confidence. Source: National Maritime Museum, Greenwich, London
The centralized AI companies in this scenario seem poised for their own Trafalgar moment. Napoleon’s fatal flaw was replacing competent officers with loyal ones, creating an institutional inability to learn from repeated failures. Similarly, these techno-Napoleons imagine titanic-sized AI systems whose very size creates critical vulnerabilities that nimble, distributed systems with broader talent pools will likely exploit.
From Maginot to AI 2027: Pride Before the Fall
Napoleon’s naval disasters weren’t isolated historical accidents but evidence of a fundamental flaw in French strategic hubris – one that would resurface catastrophically with the Maginot Line a century later.
After WWI, French military planners, writing with absolute certainty about how future wars would unfold, committed billions to an “unassailable” defensive system of fixed fortifications. This in fact meant dangerously underfunding and neglecting the more important mobile warfare capabilities that would actually determine their fate. When the Germans simply went around these expensive, supposedly impenetrable defenses through the Ardennes Forest—a possibility French generals had dismissed as “impassable”—France collapsed in just six weeks, despite having comparable military resources on paper.
Consider this critical detail: radio—a distributed, inexpensive technology—offered an asymmetric advantage that completely upended both German and French military establishment thinking (Hitler’s rapid seizure of narrative in 1933 is attributed to just three months of radio dominance). French generals, so convinced of their strategic superiority, literally ordered radios turned off during meals to enjoy privileged quiet, missing the crucial signals of their imminent defeat. This perfectly mirrors how today’s AI centralists might underfund less expensive options and ignore emerging distributed technologies that don’t fit their worldview.
The AI 2027 authors’ writing encapsulates the Maginot mentality even more perfectly. Their report assumes massive compute resources concentrated in a few corporations will determine AI’s future, while potentially missing the blaringly loud equivalent of radio, trucks, tanks and aircraft: the nimble, distributed approaches that might render their big predictions as obsolete as a French general’s radio silence over cheese and wine.
What’s particularly striking is that France could have potentially defeated the Nazi invasion with rapid, agile counterattacks in the early stages. Instead, they were paralyzed partly because an agile reality didn’t conform to their expectations of “big” and “central”. Similarly, organizations following the AI 2027 roadmap might miss critical opportunities as AI inevitably develops, if it is not already developing, along very different paths than predicted.
The French technology experts didn’t fail for lack of resources or time – they failed because their institutional structures couldn’t adapt when their expensive centralized systems proved vulnerable in ways they hadn’t wanted to anticipate. This pattern of massive overconfidence in centralized, expensive systems has been historically disastrous, yet each generation seems determined to repeat it. OpenAI maybe didn’t even need to exist, in the same way Maginot didn’t need to build his wall.
Who Really Prophets? From Rousseau Into Fascism
Intellectual celebrity, like that enjoyed by Rousseau in his day, often blinds contemporaries to problematic ideas. History eventually reassesses such celebrated figures with greater clarity. Today’s AI prophets may enjoy similar reverence, but intellectual splash and fashion remains a poor guide to truth.
Mill, Russell, Hume, and Wollstonecraft (notably unpopular and shunned in their day) approached prediction and social change with methodical caution and philosophical rigor. Today they stand tall and respected, because they recognized centuries ago that social and technological progress tends toward gradual, methodical change rather than the dramatic, centralized revolution portrayed in the “AI 2027” scenario.
The authors confidently assert four questionable assumptions as if they were self-evident truths:
Exponential capability gains are inevitable
Alignment will remain a persistent challenge
Centralization in a few companies is the natural trajectory
US-China competition will be the primary geopolitical dynamic
Each of these deserves serious scrutiny. The last, for example, appears increasingly questionable as the US political system faces internal crises and the geopolitical landscape rapidly shifts. Canada is positioned to leave the US behind in an alliance with the EU, and perhaps even China. Russia’s hand in launching America’s suicidal tariff war has all the hallmarks of Putin’s political targets mysteriously throwing themselves out of a window.
Don’t Pick a Sitting Duck For Your Flagship
What the “AI 2027” authors miss is that Napoleon’s naval strategy wasn’t defeated primarily by superior British technology or resources – it collapsed because its institutional structure couldn’t learn, adapt, or correct course when faced with evidence of failure.
We should approach these grand AI predictions with the skepticism they deserve – not because progress won’t happen, but because the most transformative developments in computing history have repeatedly come from directions that the imperial admirals of tech never saw coming.
When L’Orient exploded at the Battle of the Nile, the blast was so massive that both sides temporarily halted in awe. One wonders what similar moment of clarity awaits these techno-Napoleonic predictions. History suggests AI’s future likely belongs not to centralized imperial fleets, but to nimble, adaptive, distributed systems—those that deliver progress measured by genuine human benefit rather than another folly of over-concentrated power and profit.
The consequences of overreach in technology prediction have historical parallels, from at least the early-1600s origins of “hacking” and “ciphers” to modern AI forecasting. It’s really quite amazing to consider how Edgar Allan Poe promoted encrypted messaging, for example, to protect Americans in the 1800s from surveillance by aggressively pro-slavery state secret police.
When leaders become insulated from correction by past events that predicted their future, they risk both their credibility and their strategic position. Ask me sometime why King Charles I had his head chopped off over British Ship Money, and I’ll tell you why Sam Altman’s nonsensical reversals and bogus predictions (let alone his loyalists) aren’t a smart fit for any true enterprise (e.g. “build bridges, not walls”).
Inside the main gate of Chepstow Castle, Wales. The curtain wall on the right was breached 25 May 1648 by Isaac Ewer’s cannons and the site where Royalist commander Sir Nicholas Kemeys was killed. Photo by me.