A CFO.com interview with Mark Russinovich is funny. Mark has done a lot of stellar work on technical issues, but the interview reveals more about his social and economic philosophy. They introduce him by comparing him to Edison. They probably meant this as a compliment.
Russinovich is to rootkits as Edison was to electricity
In order to make sure that [the elephant] emerged from this spectacle more than just singed and angry, she was fed cyanide-laced carrots moments before a 6,600-volt AC charge slammed through her body. Officials needn’t have worried. [The elephant] was killed instantly and Edison, in his mind anyway, had proved his point.
Edison proved that power can be dangerous in the hands of the wrong man. And I’m not even going to rant here about how he copied others’ ideas and tried to patent them as his own.
If I were Mark I’d be insulted. They should have used Westinghouse, Tesla or some other more notable engineer and inventor. Anyone but Edison.
The bottom line is that unless Mark is going to start launching rootkits on Apple computers to convince people to buy Microsoft, he shouldn’t be compared to Edison. Now to the interview:
Does the Internet make the world a more dangerous place?
It’s the complete dependence on the Internet. Even small businesses. Think about it. You go to your doctor or your dentist. What would happen if their computer wasn’t working? What would happen if their data was destroyed? They’d be out of business.
Scary. But strangely enough, I have been in doctors' and dentists' offices when their computers were not working, and they did not seem panicked about going out of business. If anything, they seemed content to say something like "damn Microsoft systems are always broken".
Where is this complete dependence Mark speaks about? Maybe my experience is behind the times and he goes to a more modern clinic with robots and all. Does Roomba make a dental unit yet? He certainly is not speaking about "the world" of doctors and dentists I know. They embrace technology while keeping it at arm's length.
Next example:
How much damage can a virus realistically do?
It’s pretty well accepted that the Stuxnet virus, which was spread by USB keys, was created by Israel and the U.S. for the sole purpose of destroying the Iranian centrifuges that enriched uranium for its nuclear program.
Pretty well accepted? That sounds like he's not convinced. I know I'm not convinced. The world was once "pretty well accepted" to be flat. That is very different from saying something has been proven.
Mark, who is known for providing tools meant to offer hard evidence, gives us a vague and unconfirmed statement about Stuxnet's authors and purpose. Why lower the bar? Imagine if Sysinternals released a tool that said "this virtual memory region is pretty well accepted to be unusable".
And how much damage did Stuxnet really do? The report I read from nuclear investigators who know the risk/threat model says the centrifuges were already expected to fail at a high rate. Rust is said to have been a major source of centrifuge problems for the Iranian program. If oxidation apparently caused more failures than Stuxnet did, then the answer is…?
I could go on about his answers on the cloud and his bank-versus-house model of risk, but I'll skip to the conclusion.
What’s the endgame?
We’re not going to take cybersecurity seriously enough until something real bad happens, and then we’ll overreact. That’s the way things usually happen, isn’t it? When something real bad happens, the government will step in and say now we’ve got to do something, and they’ll put in all these bizarre regulations that won’t really do much and will result in a big loss of productivity.
That's like describing a doctor who treats you after you break your legs as the cause of your mobility loss. The doctor will regulate your ability to get up and walk around in order to protect your body from further harm, increasing your chance of long-term recovery and of regaining productivity. What is so bizarre about that?
After the "something real bad happens" (e.g., breaking your legs), a loss of productivity is already in play, long before any regulator shows up to help. In other words, if we're just going to talk theory, a huge gain in productivity from regulation is also conceivable. Basel II comes to mind… Now, if Mark wants to name a specific "bizarre" regulation and explain how it hurts productivity, then we can really talk about technical details instead of just socioeconomic philosophy.