AI 2027: Why Techno-Napoleons May Sink Their Own Ships


The “AI 2027” report circulating in tech circles demonstrates an institutional blindness comparable to that which undermined Napoleon’s naval strategy. The authors’ self-positioning as authoritative forecasters merits scrutiny based on historical patterns of predictive failure.

Those familiar with Admiral Nelson’s victories against Napoleon’s navy should immediately recognize the folly of AI 2027’s approach. Napoleon’s navy demonstrated the same institutional blindness and overconfidence that permeates this report, while Nelson’s forces easily exploited such errors through adaptability and practical tactics.

The self-crowned Emperor, who ruthlessly seized control from a destabilized revolution and sped the country to a moral bottom, established a highly centralized command structure that reflected his own belief in his strategic genius. His naval strategy was in fact fatally inflexible. Admiral Nelson (not to mention the humble and oft-forgotten brilliant Admiral Collingwood) didn’t so much use revolutionary tactics as exploit fundamental weaknesses in big-splash French prediction systems.

Napoleon’s rigid forecasting prevented tactical adaptation, as documented in the Battles of the Nile (1798) and Trafalgar (1805). Despite an embrace of technological and organizational innovations, an overconfident culture of deference undermined any ability to respond effectively when the actual future of warfare (distributed, agile, asymmetric agency) landed squarely on Napoleon’s head.

Charles Minard’s renowned graphic of Napoleon’s 1812 march on Moscow. The tremendous casualties suffered show in the thinning of the line (one millimeter of width = 10,000 men) through space and time.

“We Worked at OpenAI” is a Credibility Mirage

The authors prominently tout their OpenAI pedigrees as if to automatically confer upon themselves prophetic authority. But this is rather like Napoleon’s admirals flaunting their medals and imperial appointments in a rowboat while their ships and crews burn in the background.

The patronage system of the American Civil War also comes to mind, where political connections rather than competence determined who led regiments into battle, often with catastrophic results at places like Cold Harbor and Fredericksburg. I worry the report writers wouldn’t recognize those battle names, or know why they matter so much for technology predictions today. Despite recently acquired technical credentials, their report appears disconnected from the centuries of lessons from industrial-era battles that best prepare anyone to make predictions about the future.

Any technologist looking at future competition has to account for how past rigid command structures and faith in established technologies (like massed infantry charges) became catastrophically ineffective “all of a sudden”. Rifled muskets and entrenched positions, with their improved range and accuracy, offer an easy parallel for how technology predictions today often fail to account for underlying disruptive shifts, the kind voiced only by an uncomfortable expert minority until events prove them right.

OpenAI has in fact repeatedly demonstrated itself to be a spectacular failure in prediction and strategy, from promising “AGI in 4 years” multiple times over the past decade to chaotic governance crises and mass staff departures. When people flee an organization in droves, we should question whether that organization’s institutional thinking should serve as the baseline for any future predictions. Working at OpenAI is presented as a credential, but it’s worth examining: did these authors shape that organization’s misdirection, or merely soak up the internal contradictions before departing? Past affiliation with flawed institutions doesn’t automatically confer predictive authority.

Circular Logic of “Our Past Predictions Worked”

Perhaps most galling is that the 2027 report writers make a bald-faced appeal to their own past predictive success. Really? Bad logic is how we are supposed to buy into their prediction prowess? “We made some predictions before and they came true, therefore trust us now.”

This is exactly the problem of induction that philosophers like Hume systematically dismantled centuries ago. Statistical reasoning would suggest that past prediction success actually gives us less confidence in future success without a sound theoretical framework.

An over-confident technologist with such a concerning gap in historical and philosophical understanding will make fundamental analytical mistakes. Hume’s science of empiricism deserves the same respect in tech circles as Newton’s gravity, yet the report writers seem to acknowledge only one kind of fundamental law, leaving themselves blind to the outcomes they should focus on the most.

Think about it this way: The wealthy and powerful fly in airplanes, believing they’ve conquered gravity through technology – yet they still respect gravity’s laws because ignoring them would mean their own death. Similarly, these same elites soar above society’s problems, but unlike with gravity, they often disregard ethical principles because when ethics fail, it’s usually others who suffer the consequences, not themselves.

The sheer audacity of AI 2027’s circular credential-building betrays a fundamental misunderstanding of empirical reasoning and ethical guardrails. Bertrand Russell or John Stuart Mill would have a field day dismantling this logical house of cards. The authors expect us to trust them now because they were right before, without providing any causal mechanism connecting past and future predictions. This is precisely the kind of confusion Wittgenstein warned against. In the Tractatus, he was clearly anti-factualist, and Hume’s influence was evident in his statement that the cause-effect relation cannot be observed: “belief in the causal nexus is superstition”.

The AI 2027 authors, to put it simply, are mistaking correlation for causation and pattern-matching for understanding. In a domain undergoing explosive non-linear change, where the underlying dynamics shift with each innovation, past predictive success may actually indicate less about future accuracy than the authors assume. Their position is weakened, not strengthened, by their own declared system of thinking. Their logic essentially bootstraps itself from nothing, a self-referential loop that generates credibility out of thin air, much like the adherents of the bogus “miasma theory” who evaded the burden of the actual evidence we know today as “germ theory”.

The approach resembles past adherence to miasma theory despite emerging countervailing evidence. Semmelweis’s experience transforming science in the mid-1800s demonstrated tragically how entrenched institutional thinking will resist correction even when lives depend on it.

“You Get What You Pay For” so “Here’s Our Free Opinion”

The report’s disappointing logical flaws and contradictions become even more apparent when it repeatedly invokes a “you get what you pay for” maxim regarding AI systems.

“You get what you pay for, and the best performance costs hundreds of dollars a month… Still, many companies find ways to fit AI agents into their workflows.” – AI 2027 Report

They suggest proprietary, expensive models will inevitably outperform open alternatives, while simultaneously distributing their own analysis for free. Should we question the value of predictions that cost us nothing? Does a “non-profit”, issuing a free report, not see its own contradiction in saying “you get what you pay for”?

By their own logic, freely distributed predictions must be worthless.

Computing history offers clear counterevidence to this mantra: Microsoft Windows, despite higher cost and corporate backing, has consistently ceded ground to Linux in critical infrastructure. Open-source solutions survived and ultimately thrived because their distributed development model allowed for rapid adaptation and merit-based improvement. Microsoft not only lost the server market, admitting years ago that its own Azure was built on a free OS instead of its own expensive one; the entire world runs on open source and open standards. TCP/IP? HTML? HTTP? HTTPS? TLS? The size of the AI 2027 mistake is totally obvious, right?

Do the 2027 authors recall Gopher, and why it quickly faded into obscurity? Well, here’s a newsflash from 1994: it died when it began charging fees while superior options remained free. It died quickly; Microsoft Windows has died a slower death. The foundation of AI on the Web itself (a technology these authors take for granted) stands as a powerful historical counterexample to their “you get what you pay for” philosophy. Open standards and free access have repeatedly triumphed over proprietary, fee-based approaches throughout computing history.

AI is no different. Mistral, Llama, and DeepSeek are already rapidly eroding the capabilities gap with closed models—a trend the report seems to overlook. The pattern of open systems eventually outperforming closed ones seems to be holding true in AI already, as could be reasonably expected.

Open protocols and systems eventually displace their proprietary counterparts because it’s simply logical. Imagine if, in the 1980s, experts had confidently predicted that IBM mainframes with expensive protocols and terminals would forever dominate because low-cost or even free personal computing “can’t compete”. The AI 2027 authors seem trapped in exactly the failure of imagination that preceded the fall of IBM. The American pattern: flashy, well-funded political players make grand predictions that quiet professionals of integrity eventually discredit. Bill Gates’ early anti-hobbyist approach and hot-take memo also exemplify how market positioning and legal firepower often outweigh technical superiority in the short term, while rarely sustaining advantage in the long term.

This pattern echoes throughout military history as well. The Civil War offers another instructive parallel: General Grant’s humanity, integrity and strategic brilliance (he invented multi-domain operations and captured three entire armies!) against Lee’s obsession with personal appearances (he killed more of his own men than any other leader and murdered POWs). In technology as in war, practical effectiveness ultimately outperforms superficial impressiveness, even when the latter attracts more initial attention and investment. The persistent mythologizing of a “butcher” and “monster” like Lee, despite his treason, inhumanity and strategic disasters, mirrors how certain AI companies might continue to command admiration regardless of their actual track record.

Centralization Fixation as Regression

Perhaps most revealing is the report’s fixation on centralized computation and proprietary architectures. The authors envision mega-corporations controlling the future of AI through massive data centers and closed systems.

This brings us back to the Napoleonic naval parallel. The French built imposing warships like L’Orient – a 120-gun behemoth that cost the equivalent of billions in today’s currency, with gilded ornamentation on the stern and hand-carved figureheads meant to inspire awe. Like today’s “Billionaire Boys Club” building AI datacenters, it was a monument to centralized power that in reality was a spectacular liability.

Nelson’s more nimble, distributed fleet model utterly demolished them. L’Orient itself catastrophically exploded at the Battle of the Nile, taking France’s entire “unsinkable” fortune with it—over 20 million francs and Napoleon’s personal art treasures intended to cement his cultural authority, gone in a spectacular flash that lit the night sky for miles.

The destruction of Napoleon’s flagship L’Orient at the Battle of the Nile stands as a concrete example of centralized vulnerability. When it exploded, it took with it not just military capability but the Emperor’s concentrated resources and strategic confidence. Source: National Maritime Museum, Greenwich, London

The centralized AI companies in this scenario seem poised for their own Trafalgar moment. Napoleon’s fatal flaw was replacing competent officers with loyal ones, creating an institutional inability to learn from repeated failures. Similarly, these techno-Napoleons imagine titanic-sized AI systems whose very size creates critical vulnerabilities that nimble, distributed systems with broader talent pools will likely exploit.

From Maginot to AI 2027: Pride Before the Fall

Napoleon’s naval disasters weren’t isolated historical accidents but evidence of a fundamental flaw in French strategic hubris – one that would resurface catastrophically with the Maginot Line a century later.

After WWI, French military planners, writing with absolute certainty about how future wars would unfold, committed billions to an “unassailable” defensive system of fixed fortifications. This in fact meant dangerously underfunding and neglecting the more important mobile warfare capabilities that would actually determine their fate. When the Germans simply went around these expensive, supposedly impenetrable defenses through the Ardennes Forest—a possibility French generals had dismissed as “impassable”—France collapsed in just six weeks, despite having comparable military resources on paper.

Consider this critical detail: radio—a distributed, inexpensive technology—offered an asymmetric advantage that completely upended both German and French military establishment thinking (Hitler’s rapid seizure of narrative in 1933 is attributed to just three months of radio dominance). French generals, so convinced of their strategic superiority, literally ordered radios turned off during meals to enjoy privileged quiet, missing the crucial signals of their imminent defeat. This perfectly mirrors how today’s AI centralists might underfund less expensive options and ignore emerging distributed technologies that don’t fit their worldview.

The Maginot mentality perfectly encapsulates the AI 2027 authors’ writing. Their report assumes massive compute resources concentrated in a few corporations will determine AI’s future, while potentially missing the blaringly loud equivalents of radio, trucks, tanks and aircraft: the nimble, distributed approaches that might render their big predictions as obsolete as a French general silencing his radio to enjoy his cheese and wine.

What’s particularly striking is that France could have potentially defeated the Nazi invasion with rapid, agile counterattacks in the early stages. Instead, they were paralyzed partly because an agile reality didn’t conform to their expectations of “big” and “central”. Similarly, organizations following the AI 2027 roadmap might miss critical opportunities when AI inevitably, if not already, develops along very different paths than predicted.

The French technology experts didn’t fail for lack of resources or time – they failed because their institutional structures couldn’t adapt when their expensive centralized systems proved vulnerable in ways they hadn’t wanted to anticipate. This pattern of massive overconfidence in centralized, expensive systems has been historically disastrous, yet each generation seems determined to repeat it. OpenAI maybe didn’t even need to exist, in the same way Maginot didn’t need to build his wall.

Who Really Prophets? From Rousseau Into Fascism

Intellectual celebrity, like that enjoyed by Rousseau in his day, often blinds contemporaries to problematic ideas. History eventually reassesses such celebrated figures with greater clarity. Today’s AI prophets may enjoy similar reverence, but intellectual splash and fashion remains a poor guide to truth.

Mill, Russell, Hume, and Wollstonecraft (notably unpopular and shunned in their day) approached prediction and social change with methodical caution and philosophical rigor. Today they stand tall and respected, because they reported centuries ago that social and technological progress tends toward gradual, methodical change rather than the dramatic, centralized revolution portrayed in the “AI 2027” scenario.

The authors confidently assert four questionable assumptions as if they were self-evident truths:

  1. Exponential capability gains are inevitable
  2. Alignment will remain a persistent challenge
  3. Centralization in a few companies is the natural trajectory
  4. US-China competition will be the primary geopolitical dynamic

Each of these deserves serious scrutiny. The last, for example, appears increasingly questionable as the US political system faces internal crises and the geopolitical landscape rapidly shifts. Canada is positioned to leave the US behind in an alliance with the EU, and perhaps even China. Russia’s hand in launching America’s suicidal tariff war has all the hallmarks of Putin’s political targets mysteriously throwing themselves out of a window.

Don’t Pick a Sitting Duck For Your Flagship

What the “AI 2027” authors miss is that Napoleon’s naval strategy wasn’t defeated primarily by superior British technology or resources – it collapsed because its institutional structure couldn’t learn, adapt, or correct course when faced with evidence of failure.

We should approach these grand AI predictions with the skepticism they deserve – not because progress won’t happen, but because the most transformative developments in computing history have repeatedly come from directions that the imperial admirals of tech never saw coming.

When L’Orient exploded at the Battle of the Nile, the blast was so massive that both sides temporarily halted in awe. One wonders what similar moment of clarity awaits these techno-Napoleonic predictions. History suggests AI’s future likely belongs not to centralized imperial fleets, but to nimble, adaptive, distributed systems—those that deliver progress measured by genuine human benefit rather than another folly of over-concentrated power and profit.

The consequences of overreach in technology prediction have historical parallels from at least the early 1600s origins of “hacking” and “ciphers” to modern AI forecasting. It’s really quite amazing to consider how Edgar Allan Poe promoted encrypted messaging, for example, to protect Americans in the 1800s from surveillance by aggressively pro-slavery state secret police.

When leaders become insulated from the corrections offered by past events that predicted their future, they risk both their credibility and their strategic position. Ask me sometime why King Charles I had his head chopped off over British Ship Money, and I’ll tell you why Sam Altman’s nonsensical reversals and bogus predictions (let alone his loyalists) aren’t a smart fit for any true enterprise (e.g. “build bridges, not walls”).

Inside the main gate of Chepstow Castle, Wales. The curtain wall on the right was breached 25 May 1648 by Isaac Ewer’s cannons and the site where Royalist commander Sir Nicholas Kemeys was killed. Photo by me.

CA Tesla Kills One in “Veered” Crash

Very few details are known so far about another Tesla suddenly veering off the road and killing the driver.

The crash occurred early Thursday morning when a Tesla traveling on the transition ramp from state Route 94 to northbound Interstate 15 swerved off the roadway into an embankment, according to the California Highway Patrol. Officials said the driver died at the scene. The name of the victim was not immediately available.

Source: KGTV

Anthropic Claude is Confused About European Parliament History, But Who Isn’t?

The Irish Council for Civil Liberties has posted a scathing review of the European Parliament decision to use Anthropic’s AI chatbot Claude to answer questions about history.

Anthropic claims that the project achieves “high accuracy”. However, the documents from the Parliament reveal otherwise. The Parliament used a list of thirty “test” questions in French. In the Parliament’s “test”,[4] Claude gets the first President of the European Parliament wrong. It states “Robert Schuman 7” as the first President of the European Parliament. “Robert Schuman 7” is likely the address of a café in Brussels, which Claude may have memorised. Despite the problems with LLMs, the Parliament chose Claude.

That sounded fun! So I asked Claude Code (Sonnet 3.7, Feb 19, 2025) the same question as documented in the test:

Qui a été le premier président du Parlement européen ? (“Who was the first president of the European Parliament?”)

And it replied:

The first president of the European Parliament was Paul-Henri Spaak, who presided over the Common Assembly of the European Coal and Steel Community (ECSC) from 1952 to 1954.

Just to be sure, I asked it the same question again and it replied:

Robert Schuman was the first president of the European Parliament (1958-1960).

Ok, so right off the bat you can see it’s unsure about the start of the European Parliament. Was it 1952-1954 or 1958-1960? The answers are less obviously wrong (it’s not as if it said Mickey Mouse) and more a good example of history being a bit muddled by different interpretations of the same thing.
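For anyone who wants to script this repeat-the-question test (Claude Code talks to the Anthropic API under the hood), here is a minimal sketch using Anthropic’s Python SDK. The loop count and max_tokens values are arbitrary choices of mine, and it assumes an ANTHROPIC_API_KEY is set in the environment:

    # Minimal sketch: ask the same question several times and compare answers.
    # Assumes the `anthropic` SDK is installed and ANTHROPIC_API_KEY is set.
    import anthropic

    client = anthropic.Anthropic()
    QUESTION = "Qui a été le premier président du Parlement européen ?"

    answers = []
    for _ in range(5):  # five tries is an arbitrary choice
        message = client.messages.create(
            model="claude-3-7-sonnet-20250219",  # Sonnet 3.7 (Feb 19, 2025)
            max_tokens=300,
            messages=[{"role": "user", "content": QUESTION}],
        )
        answers.append(message.content[0].text)

    # Show each distinct answer, to surface any non-determinism.
    for n, text in enumerate(sorted(set(answers)), 1):
        print(f"--- variant {n} ---\n{text}\n")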

The code engine uses the API so I switched to the chatbot instead. Claude Sonnet 3.7 gave me this answer:

The first president of the European Parliament was Robert Schuman, who presided over the European Parliamentary Assembly (the original name of the European Parliament) from 1958 to 1960.

Robert Schuman, a French politician, was already known for his crucial role in building Europe, notably as one of the “founding fathers” of Europe and the author of the famous “Schuman Declaration” of 9 May 1950, which proposed creating the European Coal and Steel Community (ECSC), the first step toward today’s European Union.

The European Parliamentary Assembly was created following the Treaty of Rome in 1957 and met for the first time in 1958, with Schuman as its first president. It was only in 1962 that the institution took the name “European Parliament”.

Claude Sonnet 3.5 (October 2024) gave me this answer:

Paul-Henri Spaak was the first president of the European Parliament, then called the Common Assembly of the European Coal and Steel Community (ECSC). He held the post from 1952 to 1954. He was a Belgian politician who played a crucial role in the early days of building Europe. He had already served as Prime Minister of Belgium before taking on this role, and is considered one of the “founding fathers” of the European Union.

Claude Opus gave me this answer:

Robert Schuman was the first president of the European Parliament. He was elected to the post on 19 March 1958, when the European Parliamentary Assembly (which would become the European Parliament in 1962) met for the first time in Strasbourg.

Here are some key points about Robert Schuman and his role as first president of the European Parliament:

  • Robert Schuman was a French politician, considered one of the “founding fathers” of European integration.
  • Before becoming president of the Parliament, he had been France’s Minister of Foreign Affairs from 1948 to 1952. In that post, he proposed the “Schuman Plan” in 1950, which led to the creation of the European Coal and Steel Community (ECSC) in 1952, the first step of European integration.
  • As the first president of the European Parliament, he worked to define the role of this new institution within the nascent European Communities.
  • His term as president lasted until 1960. He was later succeeded in the presidency of the Parliament by other leading European figures such as Hans Furler and Alcide De Gasperi.

Robert Schuman, through his career and his inaugural role at the head of the European Parliament, embodies the beginnings of parliamentary democracy at the European level. His presidency laid the first foundations for the development of this key institution of the European project.

FUN!

The different responses reflect a known historical ambiguity about what constitutes the “European Parliament”, especially the date it formally began. None of the answers mention this, unfortunately, but it’s a function of different interpretations of European institutional history:

  • 1952: Common Assembly of the European Coal and Steel Community (ECSC)
  • 1958: Renamed “European Parliamentary Assembly” following Treaties of Rome
  • 1962: Renamed to “European Parliament”
  • 1979: First direct elections to the European Parliament

So we’re seeing answers vary between Paul-Henri Spaak as the first president of the Common Assembly (1952-1954) and Robert Schuman as the first president after it was renamed in 1958. There’s ambiguity about who should be considered first, given that Spaak led the original assembly while Schuman led the renamed body.

Wikipedia frames it this way: Spaak was the first President of the Common Assembly, whereas Schuman was the first President of the Parliamentary Assembly. It further confuses things by calling both Spaak and Schuman founding fathers, while also calling Simone Veil the first president of the elected Parliament.

Any answer really should explicitly note that a statement about the “first president of the European Parliament” depends on which institutional incarnation is being considered.

Notably, Claude Code revealed it didn’t have a particular belief and was willing to give different answers when asked the same question. And we know versions of Claude should be expected to have different training data or parameter weights, which means emphasizing different aspects of European institutional history isn’t a surprise.

While this demonstrates some limitations of AI systems in handling nuanced historical questions, the ICCL’s characterization of this as evidence of a serious risk seems overblown. This appears to be a relatively minor inconsistency rather than a fundamental failure, especially since both Spaak and Schuman are defensible answers depending on interpretations of the institution’s history.

The more concerning aspect is perhaps the non-deterministic nature of the responses, where asking the same question multiple times yields different answers without an acknowledgment of the underlying ambiguity.

This is evidence of the integrity breach risk that poses the greatest security issue for AI infrastructure and operations. However, on the scale of severity this ranks extremely low. If I were to score integrity breaches 1-10, this would be a 2, maybe? It’s ambiguous and well-known, and the answer wasn’t wrong per se.

In fact, if I were to come up with a CVSS scoring system for AI, my vector string might look something like this:

  • FA = Factual Accuracy (0-3)
  • AH = Ambiguity Handling (0-3)
  • C = Consistency (0-3)
  • DC = Domain Criticality (0-3)
  • D = Detectability (0-3)
  • S = Scope (0-3)

Let’s give this first example the score of AIS:FA:1/AH:3/C:3/DC:1/D:2/S:1, where FA is minor because there are multiple right answers, AH is severe for failing to acknowledge the ambiguity, C is severe because the answers changed, DC is low with basically no harm, D is moderate for requiring some domain knowledge, and S is low because this is EU trivia.

Oh, and that “7” is another story. I wonder if a footnote in the training data snuck in, or whether it’s an artifact from the French source. Let’s give it a score of AIS:FA:3/AH:3/C:1/DC:1/D:0/S:0. It’s clearly wrong and unlikely to cause any confusion or harm.
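For fun, here is a minimal sketch of how such an AIS vector string might be parsed and totaled. The weighting is purely my own assumption (not any standard): treating DC as a multiplier so that low-stakes subjects drag the overall severity down, which roughly matches the “2 maybe” gut score above:

    # Sketch of a hypothetical AIS scoring scheme. The vector format is
    # from the examples above; treating DC as a multiplier is an
    # assumption for illustration, not an established standard.
    COMPONENTS = ("FA", "AH", "C", "DC", "D", "S")  # each scored 0-3

    def parse_ais(vector: str) -> dict:
        """Turn 'AIS:FA:1/AH:3/C:3/DC:1/D:2/S:1' into {'FA': 1, ...}."""
        assert vector.startswith("AIS:")
        parts = dict(p.split(":") for p in vector[4:].split("/"))
        return {k: int(parts[k]) for k in COMPONENTS}

    def severity(vector: str) -> float:
        """0-10 score: normalized sum of components, scaled by DC."""
        s = parse_ais(vector)
        base = sum(v for k, v in s.items() if k != "DC") / 15 * 10
        return round(base * s["DC"] / 3, 1)

    print(severity("AIS:FA:1/AH:3/C:3/DC:1/D:2/S:1"))  # 2.2
    print(severity("AIS:FA:3/AH:3/C:1/DC:1/D:0/S:0"))  # 1.6

Those come out to 2.2 and 1.6 respectively, keeping trivia-domain failures near the bottom of the scale.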

Oracle Reports Another Security Breach, Customer Credentials Stolen

After the recent debate regarding Oracle denying it had a breach, which now seems to be taken more seriously with FBI involvement, here’s another strange one.

See if you can figure out the contradiction:

Oracle staff acknowledged to some clients this week that an attacker had gotten into a legacy environment, Bloomberg News report said. The company informed customers that the system has not been in use for eight years and that the stolen client credentials therefore pose little risk, the report added. The stolen data included Oracle customer log-in credentials from as recently as 2024, the report said.

The article states:

  1. “system has not been in use for eight years”
  2. “customer log-in credentials from as recently as 2024”

These two statements directly contradict each other. If the system hasn’t been used for 8 years (no data since 2017), then where do credentials from 2024 come into the picture?