The Dogs of Cyberwar
A Lowly Hacker’s Warning
LSE Commencement 2024
Distinguished faculty, dear students, and those venture capitalists or intelligence agencies inevitably lurking in the back hoping to recruit our graduates into their latest ethical catastrophe:
Thirty years ago, I sat where you’re sitting now, though with considerably fewer people and a significantly more embarrassing haircut. Back then, I was the American oddity who lived day and night in the computer lab while my half-dozen classmates assembled in the Three Tuns, competing to see whose understanding of the Cuban Missile Crisis would solve all of humanity’s problems over another round of pints that cost a pound twenty each.
I spent considerable effort getting LSE onto something new called the “World Wide Web” – a phrase that now sounds as charmingly dated as “information superhighway” or “freeze-dried coffee”, which was, by the way, the only coffee you could find in London in 1993. Can you imagine a young American hacker stepping off a plane in London and realizing only too late that he was expected to drink tea and write with a pen?
I almost immediately died from caffeine and keyboard withdrawal.
To keep calm and carry on, I volunteered to write code helping a blind PhD student in political philosophy digitize his dozens of books into robotic speech. He taught me more about seeing the world clearly (page breaks, I learned, don’t really exist in the mind’s eye) than I ever taught him about data integrity flaws in OCR algorithms. I did, however, save him from accidentally submitting his thesis with hidden instances of the letter S replaced by the number five, a substitution that in retrospect might have meant his analysis of Hobbe5’ 5tate of Nature would be credited with the invention of modern passwords.
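For the engineers lurking among you, the whole rescue amounted to little more than a scan for digits hiding inside words. What follows is a minimal sketch of the idea in modern Python – reconstructed from memory, not the long-lost original, with the thesis-checking details left as an exercise:

```python
import re

# A digit wedged inside an otherwise alphabetic word ("Hobbe5", "5tate")
# is almost never intentional in prose; it is the classic fingerprint of
# an OCR look-alike swap (S/5, O/0, l/1).
SUSPECT = re.compile(r"[A-Za-z]*\d+[A-Za-z]+|[A-Za-z]+\d+[A-Za-z]*")

def find_ocr_suspects(text: str) -> list[str]:
    """Return tokens that mix digits into otherwise alphabetic words."""
    return SUSPECT.findall(text)

# Pure numbers like "1651" pass untouched; hybrids get flagged.
print(find_ocr_suspects("Hobbe5' 5tate of Nature, 1651"))
# -> ['Hobbe5', '5tate']
```

Thirty years on, the same cheap trick still catches scanned-document artifacts, which tells you something about how slowly data integrity problems actually die.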
As you might guess, I arrived here still a raw and immature sod raised on the dirt roads of rural America. When an LSE student repeatedly left their World War I essay about military vulnerability completely exposed on one of our four shared lab computers, the irony proved as irresistible as… relieving myself on a hidden electric fence back home. A risky temptation that I really should have resisted. After watching the pattern repeat daily with the stubborn predictability of a BBC weather forecast, I did what any country bumpkin would do facing an open barn door: I scattered pointed commentary about undefended positions throughout their work. Professor Stevenson, to my great relief, marked every single edit with a bright red circle, proving he dutifully read every word we turned in, which is more than I could say of my fellow student’s attention to their own work.
Little did I know that this penchant for exposing vulnerabilities would become the perfect metaphor for my later career. Professor Kent, the best advisor anyone could ask for, encouraged me upon graduation to throw myself straight into the tech industry. And so I did. For three decades I’ve helped organizations big and small see their vulnerabilities: to keep them grounded, to make their grandiose claims about safety less full of the stuff we knew in Kansas as meadow muffins. Or prairie pancakes.
As I learned quickly as a kid, one taste and you immediately knew it was a good thing you hadn’t stepped in it.
From LSE’s tiny computer lab up to the largest corporate skyscraper boardrooms, I have spilled gallons of red ink over tens of thousands of undefended positions. Although the stakes now are rather higher than a marked-down student essay, and the giant hidden electric fences are considerably more… well, shocking.
Let me explain.
Take Palantir, named without a trace of irony after Tolkien’s all-seeing stones that invariably corrupt those who use them. They pitched venture capitalists a “revolutionary” surveillance system to “predict and prevent terrorism.” You can imagine how the VCs’ eyes lit up with dollar signs, presumably the same way medieval merchants’ eyes lit up with doubloons at the prospect of selling torture devices to the Spanish Inquisition. “Think of the market opportunity!” they must have said. “Every Queen Isabel will want one!” Did you know studies today show that Palantir actually created the terrorists it promised to predict and prevent? The self-licking ice cream cone is real.
Similarly, Tesla’s “Autopilot” promised to end traffic deaths, then proceeded to invent entirely new ways for cars to kill people. It has achieved the remarkable distinction of making the Ford Pinto look like a triumph of safety engineering. Who needs a faulty gas tank when you have AI that can turn cars into crematoriums? Henry Ford may have won the Third Reich’s highest honor, but at least he didn’t try to rebrand his Dearborn Independent newspaper with a Hakenkreuz and call himself Twittler.
You might think I’m being too glib about death, or unfair to visionaries. “Surely,” you say, “their companies must have some redeeming qualities. What about South Africans dreaming of turning Mars into New Rhodesia?” Well yes, I suppose, in the same way the East India Company really streamlined the tea trade. Have you seen the grand old counting house? I noticed the gift shop doesn’t mention how they balanced their moral ledger. The problem isn’t that the technology being assembled is unimpressive; it’s that nobody measures who really pays for it.
Which brings me to why your LSE education, and mine, matters more than ever. You see, Silicon Valley is shaping the world today rather the way some pioneering Palo Alto radar engineers once helped shape Dresden. Tech desperately needs people who can spot the rather subtle differences between innovations and repackaged historical tragedies. It needs people who, when presented with a “revolutionary” surveillance system called Bluesky, can say, “Ah yes, this is exactly like the Stasi, but with better UX design.”
You’ve been trained to see patterns that even the most brilliant engineers miss – not because they lack intelligence, but because they’ve never had to explain to Dr. Preston why Franco’s “move fast and break things” wasn’t about innovations in jerry cans. You understand that every “disruption” has a history, every “innovation” a context, and every rushed philosophy eventually breaks something rather important: democracy, or human rights, or that quaint notion that public transit shouldn’t spontaneously combust.
Let me give you a current example. Are you aware of the thousands of networked autonomous vehicles quietly amassing at a former Cold War airfield outside Berlin? The press has cheered the deforestation around the German capital as “Tesla’s biggest European outpost.” With your training, you might recognize this as rather like France celebrating the Maginot Line as its biggest investment in concrete. We’re staring down the barrel of a cybernetic Chekhov’s gun: thousands of hackable vehicles on stage in Act One are bound to cause chaos by Act Three.
You’re entering a world where technology companies have more power than most nations, yet demonstrate all the ethical sophistication of a first-year philosophy student discovering moral relativism. They need people who can see through Silicon Valley doublespeak, who understand that “making the world a better place” often means “making ourselves richer at everyone else’s expense.”
When I left LSE straight for California, with only $50 and dried coffee crystals in my pockets, I thought I was leaving behind the rigorous historical thinking this institution taught me. Instead, I found it was my most valuable skill. While the engineers around me asked whether a throwaway idea could fetch a rapid valuation, I was trained to ask whether it should exist at all. And more importantly, I was trained to recognize when “unprecedented” innovation was actually a very precedented bad idea in a shiny new package.
At one point I sat in charge of software release gates that affected two billion users, navigating the dawn of modern mobile phones and gaming consoles. With the official title of “dedicated paranoid,” I wore a t-shirt that simply said “why?” It turned out to be the most important question in Silicon Valley, though one that got me uninvited from a surprising number of launch events. Venture capitalists, I learned, prefer historical parallels to stop at the Wright brothers and skip the Hindenburg.
So, Class of 2024, as you leave these strangely sunny, bright, airy halls that I somehow remember as windowless and always wet from rain, please know that your historical training isn’t just about understanding the mistakes of the past. It’s about recognizing when someone tries to repeat them while hoping nobody notices. In a world where tech companies are speedrunning every bad idea of the 20th century, we desperately need people who can find the causes of things, to keep every AI implementation from becoming a case study in our successors’ dissertations.
You have been trained to see through the growing fog of cyberwar, whether it rises from hundreds of thousands of burning Model 3s attacking European cities or from the tales social media tycoons spread about their robots. Use your clarity of vision to improve society. The world needs your sharp tongues and sharper minds.
And to those venture capitalists in the back: yes, our graduates are available for hire. But I should warn you – they’ve been trained to spot patterns. Your term sheets look remarkably like Victorian labor contracts, just with time measured in TikToks.
Thank you, and congratulations.