My 2024 LSE Commencement Speech

The Dogs of Cyberwar
A Lowly Hacker’s Warning
LSE Commencement 2024

Distinguished faculty, dear students, and those venture capitalists or intelligence agencies inevitably lurking in the back hoping to recruit our graduates into their latest ethical catastrophe:

Thirty years ago, I sat where you’re sitting now, though with considerably fewer people and a significantly more embarrassing haircut. Back then, I was the American oddity who lived day and night in the computer lab while my half-dozen classmates assembled in the Three Tuns, competing to see whose understanding of the Cuban Missile Crisis would solve all of humanity’s problems over another round of pints that cost a pound twenty each.

I spent considerable effort to get LSE on something new called the “World Wide Web” – a phrase that now sounds as charmingly dated as “information superhighway” or “freeze-dried coffee”, which was, by the way, the only coffee you could find in London in 1993. Can you imagine a young American hacker stepping off a plane in London and realizing only too late he was expected to drink tea and write with a pen?

I almost immediately died from caffeine and keyboard withdrawal.

To keep calm and carry on, I volunteered to write code helping a blind PhD student of political philosophy digitize his dozens of books into robotic speech. He taught me more about seeing the world clearly (page breaks, I learned, don’t really exist in our mind’s eye) than I ever taught him about data integrity flaws in OCR algorithms. However, I did save him from accidentally submitting his thesis with hidden instances of the letter S replaced by the number five — a substitution that in retrospect could have meant his analysis of Hobbe5 5tate of Nature would be credited as the invention of modern passwords.
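For the curious, this bug class is easy to demonstrate. Below is a minimal sketch of the kind of check involved, assuming nothing more than plain-text OCR output; the functions are my own illustration, not any real library’s API.

    import re

    # Common digit-for-letter swaps made by OCR engines
    OCR_CONFUSIONS = {"5": "S", "0": "O", "1": "l", "8": "B"}

    def find_suspect_substitutions(text):
        """Flag digits embedded in otherwise alphabetic words,
        e.g. 'Hobbe5' or '5tate' hiding in scanned prose."""
        suspects = []
        for match in re.finditer(r"\b\w*\d\w*\b", text):
            word = match.group()
            if not word.isdigit():  # ignore genuine numbers like 1651
                suspects.append((match.start(), word))
        return suspects

    def suggest_repair(word):
        """Propose the letter-for-digit repair, e.g. '5tate' -> 'State'."""
        return "".join(OCR_CONFUSIONS.get(ch, ch) for ch in word)

    found = find_suspect_substitutions("Hobbe5 5tate of Nature, written in 1651")
    print([(w, suggest_repair(w)) for _, w in found])
    # [('Hobbe5', 'HobbeS'), ('5tate', 'State')]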

As you might guess, I arrived here still a raw and immature sod raised on the dirt roads of rural America. When an LSE student repeatedly left their World War I essay about military vulnerability completely exposed on one of our four shared lab computers, the irony proved as irresistible as… relieving myself on a hidden electric fence back home. A risky temptation that I really should have resisted. After watching the pattern repeat daily with the stubborn predictability of a BBC weather forecast, I did what any country bumpkin would do facing an open barn door: I scattered pointed commentary about undefended positions throughout their work. Professor Stevenson, to my great relief, marked every single edit with a bright red circle, proving he dutifully read each word we turned in — which is more than I could say for my fellow student’s attention to their own work.

Little did I know that this penchant for exposing vulnerabilities would become the perfect metaphor for my career. Professor Kent, the best advisor anyone could ask for, encouraged me upon graduation to throw myself straight into the tech industry. And so I did. For three decades I’ve helped organizations big and small see their vulnerabilities: to keep them grounded, to make their grandiose claims about safety less full of the stuff we knew in Kansas as meadow muffins. Or prairie pancakes.

As I learned quickly as a kid, one taste and you immediately knew it was a good thing you hadn’t stepped in it.

From LSE’s tiny computer lab up to the boardrooms of the largest corporate skyscrapers, I have spilled gallons of red ink over tens of thousands of undefended positions. The stakes now are rather higher than a student’s marked-down essay, though, and the giant hidden electric fences are considerably more… well, shocking.

Let me explain.

Take Palantir, named without a trace of irony after Tolkien’s all-seeing stones that invariably corrupt those who use them. They pitched venture capitalists a “revolutionary” surveillance system to “predict and prevent terrorism.” Of course you can imagine how the VCs’ eyes lit up with dollar signs, presumably the same way medieval merchants’ eyes lit up with doubloons at the prospect of selling torture devices to the Spanish Inquisition. “Think of the market opportunity!” they must have said. “Every Queen Isabel will want one!” Did you know studies now suggest that Palantir actually created the terrorists it promised to predict and prevent? The self-licking ISIS cream cone is real.

Similarly, Tesla’s “Autopilot” promised to end traffic deaths, then proceeded to invent entirely new ways for cars to kill people. It has achieved the remarkable distinction of making the Ford Pinto look like a triumph of safety engineering. Who needs a faulty gas tank when you have AI finding novel ways to turn cars into crematoriums? Henry Ford may have won the Third Reich’s highest honor, but at least he didn’t try to rebrand his Dearborn Independent newspaper with a Hakenkreuz and call himself Twittler.

You might think I’m being too glib about death, or unfair to visionaries. “Surely,” you say, “their companies must have some redeeming qualities. What about South Africans dreaming of turning Mars into New Rhodesia?” Well yes, I suppose, in the same way the East India Company really streamlined the tea trade. Have you seen the grand old counting house? I noticed the gift shop doesn’t mention how they balanced their moral ledger. The problem isn’t that the technology being assembled is unimpressive — it’s measuring who really pays for it.

Which brings me to why your LSE education, and mine, matters more than ever. You see, Silicon Valley is perhaps affecting the world today in much the same way that Dresden was fire-bombed with the help of some pioneering Palo Alto radar engineers. Tech desperately needs people who can spot the rather subtle differences between innovations and repackaged historical tragedies. It needs people who, when presented with a “revolutionary” surveillance system called Bluesky, can say, “Ah yes, this is exactly like the Stasi, but with better UX design.”

You’ve been trained to see patterns that even the most brilliant engineers miss – not because they lack intelligence, but because they’ve never had to explain to Dr. Preston why Franco’s “move fast and break things” wasn’t about innovations in jerry cans. You understand that every “disruption” has a history, every “innovation” a context, and every rushed philosophy eventually breaks something rather important – like democracy, or human rights, or that quaint notion that public transit shouldn’t spontaneously combust.

Let me give you a current example. Are you aware of the thousands of networked autonomous vehicles quietly amassing at a former Cold War airfield outside Berlin? The press has cheered the deforestation around the German capital as “Tesla’s biggest European output.” With your training, you might recognize this as rather like how France celebrated the Maginot Line as its biggest investment in concrete. We’re staring down the barrel of a cybernetic equivalent of Chekhov’s gun: thousands of hackable vehicles introduced in Act One are going to cause chaos by Act Three.

You’re entering a world where technology companies have more power than most nations, yet demonstrate all the ethical sophistication of a first-year philosophy student discovering moral relativism. They need people who can see through the Silicon Valley doublespeak, who understand that “making the world a better place” often means “making ourselves richer at everyone else’s expense.”

When I left LSE directly for California, with only $50 and dried coffee crystals in my pockets, I thought I was leaving behind the rigorous historical thinking this institution taught me. Instead, I found it was my most valuable skill. While the engineers around me focused on spinning throwaway ideas into rapid valuations, I had been trained to ask whether those ideas should exist at all. And more importantly, I was trained to recognize when “unprecedented” innovation was actually a very precedented bad idea in a shiny new package.

At one point I sat in charge of software release gates that affected two billion users, navigating the dawn of modern mobile phones and gaming consoles. With an official title of “dedicated paranoid,” I wore a t-shirt that simply said “why?” It turned out to be the most important question in Silicon Valley, though one that got me uninvited from a surprising number of launch events. Venture capitalists, I learned, prefer historical parallels that stop at the Wright brothers and skip the Hindenburg.

So, Class of 2024, as you leave these strangely sunny, bright, airy halls that I somehow remember as windowless and always wet from rain, please know that your historical training isn’t just about understanding mistakes of the past. It’s about recognizing when someone tries to repeat them while hoping nobody notices. In a world where tech companies are speedrunning through every bad idea of the 20th century, we desperately need people who can find the causes of things, so that every AI implementation doesn’t become a case study in our successors’ dissertations.

You have been trained to see through a growing fog of cyberwar, whether it rises from hundreds of thousands of burning Model 3s attacking European cities or from the disinformation social media tycoons spread about their robots. Use your clarity of vision to improve society. The world needs your sharp tongues and sharper minds.

And to those venture capitalists in the back: yes, our graduates are available for hire. But I should warn you – they’ve been trained to spot patterns. Your term sheets look remarkably like Victorian labor contracts, just with time measured by TikToks.

Thank you, and congratulations.


Swasticars: Remote-controlled high-explosive vehicles stockpiled by Twittler outside Berlin.

Female Ghosts in the Machine: What Wollstonecraft Knew About AI in 1792

The ghosts of female philosophers haunt Silicon Valley’s machines. While tech bros flood Seattle and San Francisco in a race to claim revolutionary breakthroughs in artificial intelligence, the spirit of Mary Wollstonecraft whispers through their fingers, her centuries-old insights about human learning and intelligence echoing unacknowledged through their algorithms and neural networks.

1790 oil on canvas portrait by John Opie of philosopher Mary Wollstonecraft (1759-1797). Source: Tate Britain, London

In “A Vindication of the Rights of Woman” (1792), Wollstonecraft didn’t just argue for women’s education; she dismantled the very mechanical, rote learning systems that modern AI companies are clumsily reinventing at huge cost. Her radical vision of education as an organic, growing system that develops through experience and social interaction reads like a direct critique of today’s rigid, mechanical approaches to artificial intelligence.

The eeriest part? She wrote this devastating critique of mechanical thinking 230 years before transformer models and large language models would prove her right. While today’s AI companies proudly announce their discovery that learning requires social context and organic development, Wollstonecraft’s ghost watches from the margins of history, her vindication as ignored as her original insights.

Notable history tangent? She died from infection eleven days after giving birth to her daughter, who then went on to write Frankenstein in 1818 and basically invent science fiction.

When we look at modern language models learning through massive datasets of human interaction, we’re seeing Wollstonecraft’s philosophic treatises on organic learning scaled to the digital age.

David Hume’s philosophical contributions are also quite striking, given they’re nearly 300 years old as well. His “bundle theory” of mind and identity reads like a prototype for neural networks.

When Hume argued that our ideas are nothing more than collections of simpler impressions connected through association, he was describing something remarkably similar to the weighted connections in modern AI systems. His understanding that belief operates on probability rather than certainty is fundamental to modern machine learning.

Every time an AI system outputs a confidence score, it’s demonstrating Hume’s point: belief is graded by evidence, arriving as probability rather than certainty.
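As a toy illustration (my own sketch, not any particular model’s code), here is how a modern classifier converts raw association strengths into graded degrees of belief:

    import math

    def softmax(scores):
        """Turn raw association strengths into graded degrees of belief:
        probability, never certainty, very much in the spirit of Hume."""
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical classifier scores for three labels
    for label, p in zip(["cat", "dog", "fox"], softmax([2.1, 0.3, -1.0])):
        print(f"{label}: {p:.2f}")  # no belief ever reaches a certain 1.0

However plausible the winner looks, the output remains a distribution over alternatives, which is exactly the kind of graded assent Hume described.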

What’s particularly fascinating is how both thinkers rejected the clockwork universe model of their contemporaries. They saw human understanding as something messier, more organic, and ultimately more powerful than mere mechanical processes. Wollstonecraft’s insights about how social systems shape individual development are particularly relevant as we grapple with AI alignment and bias. She understood, as a philosopher of the 1700s, that intelligence, whether natural or artificial, cannot be separated from its social context.

The problem with our 1950s-style flowcharts, which emerged from hard-fought victory in WWII, isn’t just that they’re oversimplified; it’s that they represent a violent step backward from the sophisticated understanding of mind and learning that Enlightenment thinkers had already developed.

We ended up with such mechanistic models, simplistic implementations like passwords instead of properly messy heatmap authentication, because the industry was funded out of military-industrial contexts that too often prioritized command-and-control thinking over organic development. TCP/IP and HTTPS were academically driven exceptions, prevailing over the Rochester-Stanford teams who fought hard to standardize on X.25, for example.
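To make the contrast concrete, here is a minimal sketch, with signals and weights that are entirely my own illustrative assumptions rather than any real product’s design, of a brittle binary gate next to a messier, graded judgment:

    # A binary password gate: one exact match decides everything.
    def password_auth(supplied, stored):
        return supplied == stored  # yes or no, with no nuance

    # "Messy" probabilistic authentication: many weak, organic
    # signals combined into a degree of confidence.
    def behavioral_auth(signals, weights, threshold=0.7):
        score = sum(weights[k] * v for k, v in signals.items())
        return score >= threshold, score

    ok, score = behavioral_auth(
        {"typing_rhythm": 0.9, "usual_device": 1.0, "usual_location": 0.4},
        {"typing_rhythm": 0.4, "usual_device": 0.35, "usual_location": 0.25},
    )
    print(ok, round(score, 2))  # graded trust instead of a single brittle secret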

When Wollstonecraft wrote about the organic development of understanding, or when Hume described the probabilistic nature of belief, they were articulating ideas that would take computer science centuries to rediscover and then present as “novel” concepts divorced from all the evidence assembled by social science.

As we develop AI systems that learn from social interaction, operate on probabilistic inference, and exhibit emergent behaviors, we’re not just advancing beyond the simplistic, war-focused mechanical models of early computer science; we’re finally catching up to the insights of 18th-century philosophy. Perhaps the real innovation in AI isn’t the technology itself, but our acceptance of a particular woman’s more sophisticated understanding, from 1792, of what intelligence really means.

The next frontier in AI, not surprisingly, won’t be found in more complex algorithms, but in finally embracing the full implications of what Enlightenment thinkers understood about the nature of mind, learning, and society. When we look at the most advanced AI systems today, and where they are going with their fuzzy logic, their social learning, their emergent behaviors, we’re seeing the vindication of ideas that Wollstonecraft and Hume would have recognized immediately.

Unfortunately, the AI industry seems dominated by an American “bromance” that isn’t particularly inclined to give anyone credit for the ideas being taken, corrupted, and falsely claimed as futuristic or even unprecedented. Microsoft summarily fired its ethicists in an apparent attempt to silence objections to its OpenAI investment, not long before a prominent OpenAI whistleblower turned up dead.

Nothing to see there, I’m sure, as philosophers rotate in their graves. We haven’t just forgotten the lessons of Enlightenment thinkers; the Sam Altmans and Mark Zuckerbergs may be actively resisting them in favor of a more controlled, corporatized, exploitative approach to technological innovation.

Let me give you an example of the kind of flawed and ahistoric writing I see lately. Rakesh Gohel posed this question on the proprietary, closed site ironically called “LinkedIn”:

Most people think AI Agents are just glorified chatbots, but what if I told you they’re the future of digital workforces?

What if?

What if I told you the tick-tock of Victorian labor exploitation and inhumane colonialism doesn’t disappear just because you rebrand it as TikTok and swap pen and paper for camera phones? Just like Victorian factory owners used mechanical timekeeping to control workers, modern platforms use engagement metrics and notification systems to maintain digital control.

The eyeball-grabbing “digital workforce” framing that Gohel stumps for essentially reimagines the factory with APIs instead of steam engines and belts. Just as factory owners reduced skilled craftwork to mechanical processes, today’s AI companies are watering down complex social and cognitive processes into simple flowcharts that foreshadow their dangerous intentions. Gohel tries to sweeten his pitch with a colorful chart, which in fact illustrates just how fundamentally broken “AI influencer” thinking can be about thinking.

That, my fellow engineers, is a tragedy of basic logic. Contrasting a function call with a while loop… is promoting 1950s-era computer theory at best. A check loop after you plan and do something! What would Deming say about PDCA, given that he was famous 50 years ago for touring the world lecturing on what this “brand new” chart claims to be the future?

The regression here goes beyond technical architecture. When Deming introduced PDCA, he wasn’t just describing a feedback loop; he was promoting a holistic philosophy of continuous improvement and worker empowerment. The modern AI agent diagram strips away all of that context and social understanding, reducing it to the barest technical loop, as the sketch below makes plain.
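Here is a minimal sketch of a so-called AI agent loop, written to show it is literally Plan-Do-Check-Act; every function here is a placeholder of my own, not any vendor’s API:

    def make_plan(state):
        return f"next step toward {state['goal']}"

    def execute(plan):
        return {"action": plan, "success": True}

    def evaluate(result, state):
        return result["success"]

    def adjust(state, result, ok):
        return {**state, "done": ok}

    def run_agent(goal, max_cycles=5):
        """A minimal 'AI agent' loop: Deming's PDCA wearing a new name."""
        state = {"goal": goal, "done": False}
        for _ in range(max_cycles):
            plan = make_plan(state)            # Plan: choose the next step
            result = execute(plan)             # Do: carry it out
            ok = evaluate(result, state)       # Check: did it move us forward?
            state = adjust(state, result, ok)  # Act: fold the lesson back in
            if state["done"]:
                break
        return state

    print(run_agent("answer the customer ticket"))

Rename the four steps “reasoning,” “tool call,” “reflection,” and “memory,” and you have this year’s revolutionary agent diagram.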

This connects back to the earlier point about Wollstonecraft, because the AI industry isn’t just ignoring 18th-century philosophy; it’s also ignoring 20th-century management science and systems thinking. The “what if” diagram presents as revolutionary what Deming would have considered, decades ago, a primitive understanding of systematic improvement.

Why does the American tech industry keep “rediscovering” and selfishly-corrupting or over-simplifying ideas that were better understood and presented widely decades or centuries ago?

A quick back-of-napkin sketch you would likely never see in the current put-other-peoples’-noses-to-the-grindstone American tech scene

Perhaps it’s because, for technically raw upwardly mobile privileged skids (TRUMPS), acknowledging any deep historical roots, such as giving real credit to the humanities or social science, would mean confronting the very harmful implications of their poorly constructed systems… implications that the world’s best thinkers, like Wollstonecraft, Hume, and Deming, have emphasized for hundreds of years.

The pattern is painfully clear — exhume a sophisticated philosophical concept, strip it to its mechanical bones, slap a technical name on it, and claim revolutionary insight. Here are just a few examples of AI’s philosophical grave-robbing:

  • “Attention Mechanisms” in AI (2017) rebranded William James’ Theory of Attention (1890). James described consciousness as selectively focusing on certain stimuli while filtering others in a dynamic, context-aware process involving both voluntary and involuntary mechanisms. The tech industry presents transformer attention as revolutionary when it’s implementing a stripped-down version of 130-year-old psychology (see the sketch after this list).
  • “Reinforcement Learning” (2015) rebranded Thorndike’s Law of Effect (1898). Thorndike described how behaviors followed by satisfying consequences tend to be repeated, developing sophisticated theories about the role of context and social factors in learning. Modern RL strips this to pure mechanical reward optimization, losing all nuanced understanding of social and emotional factors.
  • “Federated Learning” (2017) rebranded Kropotkin’s Mutual Aid (1902). Kropotkin described how cooperation and distributed learning occur in nature and society, emphasizing knowledge development through networks of mutual support. The tech industry “discovers” distributed learning networks but focuses only on data privacy and efficiency, ignoring the social and cooperative aspects Kropotkin emphasized.
  • “Explainable AI” (2016) rebranded John Dewey’s Theory of Inquiry (1938). Dewey wrote about how understanding must be socially situated and practically grounded, emphasizing that explanations must be tailored to social context and human needs. Modern XAI treats explanation as a purely technical problem, losing the rich philosophical framework for what makes something truly explainable.
  • “Few-Shot Learning” (2017) rebranded Gestalt Psychology (1920s). Gestalt psychologists described how humans learn from limited examples through pattern recognition and developed sophisticated theories about how minds organize and transfer knowledge. Modern few-shot learning presents this as a novel technical challenge while ignoring deeper understanding of how minds actually organize and transfer knowledge.
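How stripped-down is the 2017 version? Here is scaled dot-product attention in miniature, a toy sketch rather than a library implementation, selectively weighting stimuli by their relevance to the current focus, much as James described:

    import math

    def attention(query, keys, values):
        """Weight 'stimuli' (values) by relevance to the current
        focus (query): James' selective attention, as arithmetic."""
        d = len(query)
        scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
                  for key in keys]
        exps = [math.exp(s) for s in scores]
        weights = [e / sum(exps) for e in exps]
        return [sum(w * v[i] for w, v in zip(weights, values))
                for i in range(len(values[0]))]

    # Three remembered items; the query attends mostly to the first
    print(attention([1.0, 0.0],
                    [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]],
                    [[10.0], [0.0], [5.0]]))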

These philosophical ghosts don’t just haunt our machines – they’re Wollstonecraft’s vindication made manifest, a warning echoing through centuries of wisdom. The question is whether we’ll finally listen to these voices from the margins of history, or continue pretending every thoughtless mechanical implementation of their ideas is cause to celebrate a breakthrough discovery. Remember, Caty Greene’s invention of the “cotton engine,” or ’gin (following her husband’s untimely death from over-exertion), came from intentions to abolish slavery, yet it was stolen from her and twisted into the largest unregulated, immoral expansion of slavery in world history. Today’s AI systems risk following the same pattern: automation tools intended to liberate human potential being corrupted into instruments of digital servitude.

Naively uploading our personal data into any platform that lacks integrated ethical design or safe learning capabilities is more like turning oneself into a slave, exploited by the cruelty of emergent digital factory owners, than connecting to a truly intelligent agent that maintains basic freedoms and demonstrates aligned values. Agency is the opposite of what AI companies have deceptively been hawking as their vision of agents.

Gentlemen, You Can’t Dance to a Tesla Light Show: A Cold War Warning on Command & Control

Kubrick’s 1964 film “Dr. Strangelove” presented what seemed an absurdist critique of automation and control systems. While most bombers in the film could be recalled when unauthorized launches occurred, a single damaged bomber’s “CRM 114 discriminator” prevented any override of its automated systems – even in the face of an end-of-world mistake. This selective communication failure, where one critical component could doom humanity while the rest of the system functioned normally, highlighted the kind of dangerous fragility that necessitates tight regulation of automated control systems.

The film’s “discrimination” device, preventing override and sealing the world’s fate, was comical because it was the invention of a character portrayed as a paranoid conspiracy theorist (think a fictional Elon Musk). The idea that a single point of failure in communications could trigger apocalyptic consequences was considered so far-fetched as to be unrealistic in the 1960s. Yet here we are, with Tesla rapidly normalizing paranoid, delusional, automated override blocks as a valid architectural pattern without any serious security analysis or public scrutiny.

Traditional automakers since the Ford Pinto catastrophe understand design risks intuitively — they build mechanical overrides that cannot be software-disabled, showing a fundamental grasp of safety principles that Tesla has blithely abandoned. In fact, other manufacturers specifically avoid building centralized control capabilities, not because of difficulty, but because engineers should always recognize and avoid inherent risks — following the same precautionary principle that guided early nuclear power plant designers to build in physical fail-safes. The infamous low-quality, high-noise car-parts assembly company known as Tesla, however, has apparently willfully recreated, at massive scale, the worst architectural vulnerabilities threatening civilian infrastructure.

Most disturbing is how Tesla masks a willful destruction of societal value systems using toddler-level entertainment. The “Light Show” is presented as frivolous and harmless, much like how early computer viruses were dismissed as fun pranks rather than the serious security threats that would come to define devastating global harms. But engineers know the show is not just trivial LED audio-response code plugged into a car. What it actually demonstrates is a fleet-wide command and control system without sensible circuit breakers. It promotes highly explosive chemical cluster bombs mindlessly following centrally planned orders without any independent relation to context or consequences. It turns a fleet of 1,000 Teslas into automation warfare concepts reminiscent not just of the Gatling gun or the machine guns of African colonialism, but of the Nazi V-1 rocket program of WWII — a clear case of automated explosives meant to operate in urban environments that couldn’t be recalled once launched.

Finland 1940:

Threat? What threat? Soviet Foreign Minister Vyacheslav Molotov said he was just airlifting food into Finland. (Molotov’s “bread basket” technology — leipäkori — was in fact a cluster bomb. And yes, Finland was so anti-Semitic their air force really adopted the hooked-X as its symbol. REALLY!)
26 Jan 1940: “…the civil defense chief has named ‘Molotov’s Bread Basket.’ …equipped with 3 winged propeller devices. Its contents are divided into compartments containing dozens of different incendiary and ignition bombs. When the propeller sets the torpedo into a powerful spinning motion, the bombs have opened from its sides and scattered around the environment. …the Russians are throwing bread to us in their own way.” Source: National Library of Finland

Finland 2024:


Threat? What threat? Musk says it’s just a holiday light show. These are all just Tesla food-delivery vehicles clustered for “throwing bread to us in their own way,” like the fire-bombing winter of 1939 all over again.

The timing of propaganda is no accident. Tesla strategically launches these demonstrations during holidays like Christmas, using celebratory moments to normalize dangerous capabilities. It’s reminiscent of the “Peace is our Profession” signs decorating scenes in Dr. Strangelove, using festive imagery to mask dangerous architectural realities.

British RAF exchange officer Mandrake in the film Dr. Strangelove. Note the automation patterns or plays surrounding the propaganda.

Tesla’s synchronized light shows, while appearing harmless, demonstrate a concerning architectural pattern: the ability to push synchronized commands to large fleets of connected vehicles with potentially limited or blocked owner override capabilities. What makes this particularly noteworthy is not the feature itself, but what it reveals about the underlying command and control objectives of the controversial political activists leading Tesla. The fact that Tesla owners enthusiastically participate in these demonstrations shows how effectively the security risk has been obscured — it’s a masterclass in introducing dangerous capabilities under the guise of consumer features.

More historical parallels? I’m glad you asked. Let’s examine how the Cuban Missile Crisis highlights the modern risks of automated systems under erratic control.

During the Cuban Missile Crisis, one of humanity’s closest brushes with global nuclear catastrophe, resolution came through human leaders’ ability to identify and contain critical failure points before they cascaded into disaster. Khrushchev had to manage not just thorny U.S. relations but also prevent independent actors like Castro from triggering automated response systems that could have doomed humanity. While Castro controlled a small number of weapons in a limited geography, today’s Tesla CEO commands a vastly larger fleet of connected vehicles across every major city – with demonstrably less stability and even more concerning disregard for fail-safe systems than Cold War actors showed.

As Group Captain Mandrake illustrated so brilliantly to audiences watching Dr. Strangelove, having physical override capabilities doesn’t help if the system can fail-unsafe and ignore them. Are you familiar with how many people were burned alive in Q4 2024 by Tesla door handles failing to operate? More dead in a couple of months than in the entire production run of the Ford Pinto, from essentially the same design failure — a case study in how localized technical failures become systemic catastrophes when basic safety principles are ignored.

Tesla’s ignorant approach to connected vehicle fleets presents a repeat of these long-known and understood risks at an unprecedented scale:

  • Centralized Control: A single company led by a political extremist maintains the ability to push synchronized commands to hundreds of thousands of vehicles or more
  • Limited Override: Once certain automated sequences begin, individual owner control may have no bearing regardless of what they see or hear
  • Network Effects: The interconnected nature of modern vehicles means system-wide vulnerabilities can cascade rapidly
  • Scale of Impact: The sheer number of connected vehicles creates potential for widespread disruption

As General Ripper in Dr. Strangelove would say, “We must protect our precious vehicular fluids from contamination.” More seriously…

Here are some obvious recommendations that seem to be missing from every single article I have ever seen written about Tesla’s flashy “light discriminator” demonstrations:

  1. Mandate state-level architectural reviews of over-the-air update systems in critical transportation infrastructure. Ensure federal agencies allow state-wide bans of vehicles with design flaws. Look to aviation and nuclear power plant standards, where mandatory human-in-the-loop controls are the norm.
  2. Require demonstrable owner override capabilities (disable, reset) for all automated vehicle functions — mechanical, not just software, overrides (a minimal sketch follows this list)
  3. Develop frameworks for assessing systemic risk in connected vehicle networks, drawing on decades of safety-critical systems experience
  4. Create standards for fail-safe mechanisms in autonomous vehicle systems that prioritize human control in critical situations
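To illustrate recommendation 2, here is a minimal sketch of a fleet-command gate with owner consent and a batch circuit breaker; every name and limit is an illustrative assumption of mine, not Tesla’s actual architecture:

    from dataclasses import dataclass

    @dataclass
    class Vehicle:
        vin: str
        owner_approved: bool = False  # explicit owner opt-in per command
        local_override: bool = True   # a local, mechanical stop must always win

    MAX_BATCH = 100  # circuit breaker: no synchronized fleet-wide blast

    def dispatch(command, fleet):
        """Apply a remote command only where the owner consented and a
        local override remains able to halt it; refuse oversized batches.
        (Command signing and verification are omitted from this sketch.)"""
        if len(fleet) > MAX_BATCH:
            raise RuntimeError("batch exceeds circuit breaker; stagger the rollout")
        return [v.vin for v in fleet if v.owner_approved and v.local_override]

    fleet = [Vehicle("VIN0001", owner_approved=True), Vehicle("VIN0002")]
    print(dispatch("light_show", fleet))  # only the consenting vehicle runs it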

What Kubrick portrayed as satire — how a single failed override in an otherwise functioning system could trigger apocalyptic consequences — has quietly become architectural reality with Tesla’s rising threats to civilian infrastructure. The security community watches light shows while missing their Dr. Strangelove moment: engineers happily building systems where even partial failures can’t be stopped once initiated, proving yet again that norms alone won’t prevent the creation of doomsday architectures. The only difference? In 1964, we recognized this potential for cascading disaster as horrifying. In 2024, we’re watching people ignorant of history filming it to pump their social media clicks.

In Dr. Strangelove, the image of a single malfunctioning automated sequence causing the end of the world was played for dark comedy. Today’s Tesla demonstrations celebrate careless intentional implementations of equally dangerous architectural flaws.

60 years of intelligence thrown out? It’s as if dumb mistakes that end humanity are meant to please Wall Street, all of us be damned. Observe Tesla propaganda as celebrating the wrong things in the wrong rooms — again.

Clausewitz Paradox: When Thinking About Thinking Becomes Routine

Military professionals love a good Clausewitz discussion, especially looking at this past week in Syria. His trinity of people, army, and government has become almost liturgical. It’s the kind of comfortable framework we apply to everything from counterinsurgency to cyber warfare. But there’s an irony here that Clausewitz himself might appreciate: our very reliance on his framework demonstrates the human tendency to turn dynamic thinking into static routine.

Perhaps Clausewitz’s best insight, one echoed in every other profession in the world, was that warfare exists in constant tension between:

  • What can be systematized (tactics, drills, logistics)
  • What requires judgment (strategy, adaptation, creativity)

But here’s the meta-lesson: The way we invoke Clausewitz has itself become a routine. We’ve turned his warning about the dangers of routine thinking into… a routine way of thinking.

The crystallization of dynamic thought into static procedure is a pattern that appears everywhere in human endeavor. Scientific method becomes checklist science. Medical diagnosis becomes search-engine symptom matching. Strategic planning becomes fill-in-the-blank templates.

The true lesson of Clausewitz thus shouldn’t be reduced to his trinity or his maxims. It comes from recognizing a balance that is often lost: even our frameworks for handling complexity can become cognitive crutches. His work should be a cradle for military thought, not its grave. The moment we think we’ve fully understood Clausewitz is the moment we’ve missed his point entirely.

I submit that the best way to honor Clausewitz is to recognize when we need to move beyond him, as he argued that each age must write its own book about war. The most dangerous routine might be our routine ways of thinking about how to avoid routine thinking.

The Journal of the United States Artillery once put it like this:

Source: Journal of the United States Artillery, Volume 81, Page 293, 1938

This quote perfectly captures a recursive rule about not following rules slavishly. And the source makes it even more powerful: Grant was often criticized by his contemporaries for being “unscientific” and not following accepted military wisdom, yet he was unquestionably the most successful general of the Civil War, if not all American history.

Even the way we think about thinking needs to avoid becoming dogmatic. The real art is maintaining the tension between structure and adaptability, knowing enough to be competent but remaining flexible enough to be creative.

I’ve heard this described as healthy mental river flow, where we must avoid becoming tangled on either bank. One bank is chaotic and forever giving way; the other is rigid and unforgiving. The irony is that this too could become a rigid formula if we’re not careful!

And for what it’s worth, the seditious Confederate General Lee’s rigid adherence to offensive doctrine, a fixation on decisive Napoleonic-style engagements, led to several catastrophic decisions.

  • Favored aggressive offense to expand slavery, instead of the defensive tactics that were far more strategic to preserve slavery
  • Focused on his personal stake in Virginia theater operations despite the war’s center of gravity shift west
  • Continued agitating for decisive battle outcomes even after Gettysburg showed this was fatally flawed

Grant, by contrast, showed remarkable adaptability, thinking 100 years ahead of his time.

When Grant encountered a problem at Vicksburg, he didn’t just try a different tactical approach; he reinvented what was possible. After failed frontal assaults, he executed one of the most audacious campaigns in military history: he marched his army down the western bank of the Mississippi, ran gunboats and transport ships past the Confederate batteries at night (a move considered suicidal), crossed back to the eastern bank well south of Vicksburg, and then lived off the land while cutting loose from his supply lines entirely.

This was mind-bending for the era. Armies were supposed to maintain their supply lines at all costs. Instead, Grant’s troops, carrying just five days of rations, marched through enemy territory for two weeks, fighting five major battles and confounding both the Confederates and his own superiors. When Lincoln heard of this, he said:

I think Grant has a thought. He isn’t quite sure about it, but he has it.

At Cold Harbor, after suffering heavy casualties in frontal assaults (a mistake he openly acknowledged), Grant didn’t retreat to lick his wounds like his predecessors. Instead, he secretly moved his entire army across the James River — a force of 100,000 men with wagons, artillery, and supplies — using a 2,100-foot pontoon bridge. The Confederates didn’t even realize he’d gone until his army was threatening Petersburg.

The Overland Campaign showed Grant’s grasp of both operational art and psychology. Previous generals had retreated after tangling with the “monster” Lee. Grant, instead, kept moving southeast. After each battle, his troops expected to retreat north. Instead, they’d get orders to advance by the left flank. This persistent southward movement had a profound psychological effect on both armies. Union troops began to see they were finally heading toward Richmond, while Confederate troops realized this enemy wasn’t going to quit at first bluster.

Even his staffing choices showed innovation. While other generals leaned on pedigree and patronage, Grant promoted talented officers on demonstrated ability, elevating leaders like William Smith and James Wilson, who became a cavalry commander at 26. Perhaps due to his own “self-made” background, he dismissed patronage as irrelevant to performance.

Then there was his approach to intelligence gathering. Rather than relying solely on cavalry scouts and spies, Grant made extensive use of freed slaves’ knowledge of local geography and Confederate movements. This wasn’t just innovative; it reflected his view of human value and talent as transformative, recognizing the strategic worth of local knowledge that others ignored out of racism.

These weren’t just tactical innovations, they represented a flexible yet practical way of thinking about the world. A fundamentally different path than what came before.

Lee remained fixated on winning decisive battles in a Napoleonic style, while Grant grasped how the Civil War was changing everything, becoming what we’d now call a “total war,” requiring an operational art that combined military, political, and economic elements… not unlike what we’ve seen in Syria lately.

The campaign that best exemplifies Grant’s transformative touch was the strategic March to the Sea, led by Sherman. While Lee sat in his tent having his boots shined for future battlefield glory, Grant understood that Confederate resistance depended on both military force and civilian will. The March to the Sea demonstrated the Confederacy’s aggression as weakness, revealing an inherent inability to protect itself.

Grant had likely never been exposed to Clausewitz, but the Prussian theorist would have recognized in Grant’s strategy the targeting of the enemy’s center of gravity, the key to his resistance.

The rise of cyberwarfare, AI, and hybrid warfare demands the kind of adaptable, systemic thinking Grant exemplified rather than Lee’s routine and doctrinaire (e.g. racist) approach. So the next time someone waves an ISIS or Confederate flag, just think about it… because the flag stands as evidence that they don’t.