Here’s my response to Bruce’s review of Cyber War: The Next Threat to National Security and What to Do About It by Richard Clarke and Robert Knake, HarperCollins, 2010:
I guess I should blog about this separately, and I have done so a little already, but here’s my take:
1) Clarke is great at warning us about yesterday’s windmills. The discussion has been public for a while now (since at least 1999) and money is being funneled into the congressional-military-industrial complex (the term Eisenhower originally preferred). That’s not necessarily a bad thing, and he should be congratulated for it, but it’s time to update the story.
2) The (newish) risks he could warn about are related to hyper-collaborative bonds and time-bound social groups. When people ask “who was behind Stuxnet” they really should be asking who was *not* behind Stuxnet. What Gonzales showed in spades is that special collaboration is the new nuke. Attribution is a pain and defining the foe is nearly impossible. This is part of what I tried to argue at RSA Europe: don’t ban crossbows; out-think the mercenaries. A government could seed a group with a dumb and attributable tool, such as LOIC; that makes defining the foe easy, since the group has been tagged (even for future reference).
3) I asked Clarke why he brings up, but does not compare, the risk of a mechanical gas-pipe explosion in California with the cyber-alteration of uranium enrichment in Iran. He said it was because the latter is “so much more complex”. That indicates a common cybermistake to me: fear of the unfamiliar, rather than of the likely or the severe. Maybe he can make a good case for Stuxnet’s severity, but I still don’t see it.
To me, the cold and calculated assassination of the uranium enrichment scientists should have been in the press as much as Stuxnet, no? Motorcyclists who stick a bomb to a scientist’s door and then ride away? How’s the treaty against that going?
Back to 2): there are many other examples of real (severe and likely) risk that need to be addressed, such as the impact of failing education and child health. That’s why, turning his own model around, I wish Clarke spent less time on how to respond to printer fires and worms and more on new forms of attack prevention: why and how to keep youth from being recruited into (temporal social network) groups that will intentionally or even accidentally blow up gas lines. Whether they use a wrench, ssh or java does not scare me as much as how easily they are misdirected.
A couple weeks ago Clarke wrote this update:
…because the attack on the Iranian nuclear facility got out into the wild and analyzed, it can now be used against the US by altering it slightly (changing the Zero Day and the SCADA system-PLC target). And we are completely vulnerable. Get it now? Think power grid black out.
A power grid blackout like the Northeast in 1965, New York City in 1977, the West in 1996, the Northeast in 2003, or something worse?
Let’s assume this slightly altered Stuxnet is made; would it be any more likely to succeed than any of the other attacks that can cause a power grid blackout? I mean, is the power grid only “completely vulnerable” to Stuxnet, or is it already completely vulnerable to other attacks that we just do not see yet? I am thinking of the San Bruno explosion again.