Why a Cyber Pearl Harbor Will Never Happen

The easy answer is really a semantic one: nothing that can be done in cyber (information technology) is directly comparable to widespread kinetic destruction of military forces.

Once something approaches that level of destructive force, it’s no longer really the domain of cyber. In other words, we wouldn’t call it a voice attack if someone launched nuclear weapons by speaking into a microphone instead of turning keys. As the 1941 book “War on the Short Wave” put it on page 69:

Gunpowder, it is said, was first used as holiday crackers. Radio in the early days operated to give men pleasure. Both have been turned to use in wars and nations have used broadcasting as an ally of the bomb.

Source: War On The Short Wave, 1941

Ally of the bomb. Not the bomb.

More seriously, the problem lies in the psychological power of the narrative. Despite basic early indicators, the attack on Pearl Harbor came as a “bolt out of the blue” on a major military target.

Their duty done, George, who was new to the unit, took over the oscilloscope for a few minutes of time-killing practice. […] Their device could not tell its operators precisely how many planes the antenna was sensing, or if they were American or foreign, military or civilian. But the height of a spike gave a rough indication of the number of aircraft. And this spike did not suggest two or three, but an astonishing number—50 maybe, or even more. “It was the largest group I had ever seen on the oscilloscope,” said Joe.

It was just past 7 in the morning on December 7, 1941, when the US failed to recognize over 300 Japanese planes about to unleash massive devastation on the Navy.

Now take, for example, a modern nuclear weapon delivered by intercontinental missile: a surprise attack arriving in less than half an hour.

Such a surprise on the right targets might prevent any kind of counter-strike. That is an apt framing for lightning dropping out of a clear blue sky and zapping capabilities.

As I’ve documented here before, however, it’s been a VERY long road: since at least the 1970s the evidence has been telling us that the normal state of information technology is more like continuous grinding attacks everywhere, all the time.

Andrew Freedman describes this phenomenon as “more like a hill we’re sliding down at ever-increasing speed”.

We can choose to alter course at any time by hitting the brakes…. But the longer we wait, the faster we’ll be traveling, and the more effort it will take to slow down and achieve the cuts that are needed. And we’ve already waited a long time to start pumping the brakes.

Please note, this is NOT to be confused with a slippery slope, which implies there are no brakes and thus is a fallacy.

It’s pretty much the opposite of Pearl Harbor as a narrative — a never-ending thunderous grey downpour leading to an increasing rate and scope of failures. There is no bolt from the blue, no sudden wake-up event without warning.

Imagine Pearl Harbor being told to you as a story about constant rust forming on ships that also have a problem with petty theft and the occasional targeted adversary. Sound different? THAT is cyber.

Otherwise wouldn’t any event such as this one rise to become a Pearl Harbor?

Eighty percent of email accounts used in the four New York-based US Attorney offices were compromised [by Russian military intelligence].

We’d be talking about tens of thousands of Pearl Harbor events each year (when in reality, who even remembers the Code Spaces cloud breach of 2014, which instantly put the company out of business?). Or let me put it this way: for nearly half the years since Pearl Harbor, the US has talked about a Cyber Pearl Harbor.

If anything, 2016 was it, and even that was more like a poorly done coup than a destructive bombing preventing a counterattack.

My main quibble with my own argument here is the poor-quality engineering practiced by companies like Uber and Tesla. Nobody needs to be sending intercontinental missiles to America when they can remotely automate widespread carjacking instead.

Take that kind of bad engineering, maliciously route 40,000 cars in an urban center, and you’ve got a surprise mass casualty event via information technology vulnerabilities… which sounds an AWFUL lot like a bolt out of the blue: tens of thousands of highly explosive Teslas loitering about stealthily, waiting to become adversarial dive-bombers.

The counterargument to my counterargument is that Tesla has already been killing a LOT of people, becoming less safe after installing fraudulent “autopilot” and proving at least 3X more likely than comparable cars to kill its driver. We won’t see a Pearl Harbor, even in driverless cars, as long as Tesla is allowed to keep normalizing devastating crashes and ignoring its mounting death toll.

Anyway, all this debate about the relevance of Pearl Harbor has come up again in another article, which bizarrely claims a negative: that we didn’t see the lack of a cyber Pearl Harbor coming.

Over the past decade, cyber warfare has changed in ways the experts didn’t see coming.

Let me say that again. They’re suggesting we didn’t see the lack of a Pearl Harbor attack coming, when that is exactly what we saw (those predicting a bolt from the blue always faced opposition).

I mean their point is just flatly false.

As an expert (at least to some, hi mom!) in both cyber and military history, I absolutely saw today’s situation coming and gave numerous very public talks and comments about it.

Hell (to paraphrase military icons in movies), I even gave a presentation in 2012 dedicated to cyber warfare that predicted a lot of what mass media has only just started talking about.

Meh.

The article goes on to say experts didn’t predict that laying networks into repressive regimes would increase repression, yet again, that is false. Early reports said exactly that. It wasn’t rocket science.

You deliver shiny new tools into a power vacuum (let’s say a pitchfork, for example) and want to believe optimists that they won’t be used as weapons or lead to oppression. Because why?

History and political science as a guide told us the opposite would come and that’s exactly what we’ve seen.

“Quitting” When Unready: A Curious Case of Sleep Loss

The Air Force is having a moment regarding a decision to abort an exercise due to sleep loss.

“If it was a real world sortie, I can guarantee that those crews would get their energy drinks of choice, roll out to the plane, and fly to defend our nation,” he said. “I don’t know of any E3 member that would deny a flight if the Russians were coming no matter their state of rest. So in wartime, our asses would be flying and we would gladly do it. But this wasn’t real world. It was an exercise. You can’t replace the lives that would be lost if a plane went down.”

Smart move to cancel the exercise, I have no doubt given the details revealed so far… and this reminded me of two things.

First, recent neuroscience studies of mental and physical well-being show clear degradation from sleep loss.

Three consecutive nights of sleep loss can have a negative impact on both mental and physical health. Sleep deprivation can lead to an increase in anger, frustration, and anxiety. Additionally, those who experienced sleep loss reported a change in physical wellbeing, including gastrointestinal and respiratory problems.

Second, I keep seeing leaders who accommodate rest and recuperation get criticized as “quitting”, which seems totally counter-intuitive.

If you don’t “quit” to eat and drink, the body risks even bigger shutdown. If you don’t “quit” to heal from injury you may fail to heal and cause wider injury. If you don’t “quit” to sleep… disaster.

Knowing when not to do something could be as important as knowing when to do it.

Somehow a blind and unthinking version of “don’t quit” (urging people to damage themselves in ways they cannot sustain anyway) is growing out of control, to the point where people use social media platforms to push others off cliffs instead of stopping/quitting to consider the obvious consequences of such a predictable failure.

Even more complicated than sleep loss are the “twisties”, as noted recently in Olympic gymnastics:

“We also do a lot of work to teach them how to listen to their bodies’ warning signs that they are heading down the wrong path,” he continued. Andrews noted that Biles had more stressors than most, being forced to represent USA Gymnastics, the institution that enabled her sexual abuse by Larry Nassar, because it’s the only pathway to the Games. …getting past the twisties can take time, sometimes days, weeks or even months to resolve. “This isn’t as easy to fix as just sleeping it off and hoping for a better day tomorrow,” one former gymnast and diver pointed out on Twitter. […] The worst case scenario isn’t a lost competition or even a serious injury, like a ruptured Achilles. In gymnastics, it can result in paralysis, or even death.

Getting well to avoid death is a form of “quitting” only in the sense that it’s taking a very wise step to ensure survival and thus continuation. The case of Biles is especially telling because it is about a black woman who was forced to represent the very institution that enabled her sexual abuse.

Biles clearly has declared self-control over her own body in a multitude of ways. This latest demonstration is surely inspiring others to think about mental as well as physical success. Her stepping aside also puts her in a better place to help and support her team to succeed than if she had experienced catastrophic failure. It’s a very wise choice demonstrating excellent leadership qualities, and something I expect any special operations team would recognize.

From that, a number of white men seem to be upset and hyperventilating publicly about her “quitting”, issuing completely tone-deaf demands that a black woman be forced to do what they want instead.

So I encourage people to read about the USAF and then the Olympics to think about the parallels. Did they quit, or did they refuse to quit by taking a safety break?

Simon Sinek says we should start calling it “falling” instead of “failing” (let alone quitting) because it implies we get up again.

Porsche “Adaptive Cruise” Safety Model

A new graphic from the Porsche newsroom is an excellent example of what I’ve been calling the gap between the ERM (easy, routine, minimal judgment) and ISEA (identify, store, evaluate, adapt) functions for every form of “intelligence”.

Source: Porsche
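
To make that gap concrete, here is a minimal sketch in Python of how an ERM fast path and an ISEA slow path might split the work. Everything in it (the names, the novelty score, the 0.5 threshold) is purely illustrative and hypothetical on my part, and implies nothing about Porsche’s actual design:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    kind: str        # e.g. "lane_keep", "intersection", "obstacle"
    novelty: float   # 0.0 = routine and well-modeled, 1.0 = never seen before

MEMORY: list[Observation] = []  # what the ISEA path "stores" for adaptation

def erm(obs: Observation) -> str:
    """Easy, Routine, Minimal judgment: a cheap, cached response."""
    return f"routine handling of {obs.kind}"

def isea(obs: Observation) -> str:
    """Identify, Store, Evaluate, Adapt: expensive deliberation."""
    MEMORY.append(obs)  # store the observation so the system can adapt later
    return f"deliberate replanning for {obs.kind}"

def handle(obs: Observation) -> str:
    # The gap: a router tuned only by frequency sends rare,
    # high-stakes observations down the cheap ERM path.
    return isea(obs) if obs.novelty > 0.5 else erm(obs)

print(handle(Observation("lane_keep", 0.1)))     # routine handling
print(handle(Observation("intersection", 0.9)))  # deliberate replanning
```

The danger, as the rest of this section argues, is a handle() that almost never takes the ISEA branch.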

Data on “infrequent maneuvers” caught my eye in particular. I find it misleading to try to frame observations in the loop by frequency.

We might stop infrequently on every road (even city blocks tend to give more time rolling than stopping), yet stopping is driven by the events that matter most to our survival (e.g. intersections, obstacles).

In fact, if you look at Dan Ford’s dissertation about John Boyd (inventor of the famous OODA loop — observe, orient, decide, act), we’re reminded that “infrequent maneuvers” might be best framed as our constant reality (page 50):

As Antoine Bousquet summarizes John Boyd’s thinking in The Scientific Way of Warfare, “Boyd believes in a perpetually renewed world that is ‘uncertain, ever-changing, unpredictable’ and thus requires continually revising, adapting, destroying and recreating our theories and systems to deal with it.” Grant Hammond expresses it this way: “Ambiguity is central to Boyd’s vision … not something to be feared but something that is a given…. We never have complete and perfect information. We are never completely sure of the consequences of our actions…. The best way to success … is to revel in ambiguity.”

There’s of course an extremely high cost to reveling in ambiguity, versus the low cost of routines. But the point still stands: framing an expected risk as an infrequent one is a dangerous game to play.
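
A toy calculation makes that danger plain. The numbers below are invented purely for illustration, but they show how ranking events by time share alone buries exactly the events that dominate expected harm:

```python
# name: (share of driving time, expected harm if mishandled)
# Values are made up for illustration only.
events = {
    "cruising":     (0.90, 0.1),
    "lane_change":  (0.08, 1.0),
    "intersection": (0.02, 10.0),   # the "infrequent maneuver"
}

most_frequent = max(events, key=lambda e: events[e][0])
highest_risk  = max(events, key=lambda e: events[e][0] * events[e][1])

print(most_frequent)  # cruising: dominates time on the road
print(highest_risk)   # intersection: rare, yet dominates expected harm
```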

Back to the Porsche newsroom, my favorite image is actually this one:

Source: Porsche

The detection illustrated here is exactly the same as what I documented extensively and presented in 2016 with regard to Tesla sensor and learning failures (a tragic foreshadowing of Brown’s death just weeks after his lane change incident).

New Coyote Anti-Swarm Missile Straight Out of 1953

The first operational anti-aircraft missile system, the Nike Ajax, was launched by the United States in 1953.

A new guided missile system was needed which could destroy entire formations of high-altitude, high-speed aircraft at greater ranges with a single missile. After extensive studies, it was determined that this new system would require the use of a nuclear warhead in a new missile having greater range and speed than the Nike-Ajax missile.

Fast-forward to today, and Raytheon PR announces that an anti-swarm missile system, the Block 3 Coyote, has “aced” a military test.

Block 3 utilizes a non-kinetic warhead to neutralize enemy drones, reducing potential collateral damage.

To be fair, Raytheon distinguishes the Block 3 as a reusable model, unlike the Block 2.

Unlike its expendable counterpart, the non-kinetic variant can be recovered, refurbished and reused without leaving the battlefield.

It’s interesting to differentiate it in the PR as non-kinetic, given how it probably has a kinetic effect (e.g. bursts of electromagnetic energy destroying or disabling electronics).

Also it’s not really fair to say a kinetic platform can’t be reusable, since that’s a design decision (e.g. an explosive warhead could be launched from a reusable platform, the way planes launch missiles).

I suspect someone demanded a lower-cost profile on the Coyote and marketing came up with the language to make a false distinction from the earlier design.