Here’s a strange paper from the Tenth Workshop on the Economics of Information Security (WEIS 2011). It’s called “Who Should be Responsible for Software Security”.
I think their definition of zero-day is very lacking, to begin with:
A zero-day attack is defined as “an exploit, worm or a virus capable of crippling global web infrastructure either prior to, or within hours of, a public announcement of a computer system vulnerability” (McBride 2005).
Their definition is from a Computerworld article in 2005 by Siobhan McBride. Here is the part they cut off from the beginning of the original sentence:
While definitions of a zero day attack vary, it is generally considered to be…
Generally considered? In 2005? There is nothing to support the definition; no reference or study cited.
Public announcement and global severity seem to be the key factors in McBride’s general definition. Reversing it shows how it fails a common-sense test:
If an attack does not cripple global web infrastructure prior to, or within hours of, a public announcement, can it still be a zero-day?
Stuxnet is one obvious example: an attack that many call a zero-day, yet one that fails this definition, since it was quiet and targeted rather than anything that crippled global web infrastructure around a public announcement.
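To make the mismatch concrete, here is a minimal sketch (my own illustration, not anything from the paper; the field names and the 24-hour reading of “within hours” are assumptions) that encodes the McBride definition next to a plain patch-availability definition and applies both to Stuxnet:

```python
from dataclasses import dataclass

@dataclass
class Attack:
    name: str
    cripples_global_web: bool                   # took down web infrastructure at scale?
    hit_before_or_just_after_disclosure: bool   # ran before, or within hours of, a public announcement?
    patch_available_when_exploited: bool        # was a vendor patch available at the time?

def is_zero_day_mcbride(a: Attack) -> bool:
    """McBride (2005): must cripple global web infrastructure prior to,
    or within hours of, a public vulnerability announcement."""
    return a.cripples_global_web and a.hit_before_or_just_after_disclosure

def is_zero_day_patch_based(a: Attack) -> bool:
    """A common alternative reading: exploitation of a flaw for which
    no patch existed at the time of the attack."""
    return not a.patch_available_when_exploited

# Stuxnet: targeted and quiet, exploited several unpatched Windows flaws,
# but never crippled global web infrastructure.
stuxnet = Attack("Stuxnet",
                 cripples_global_web=False,
                 hit_before_or_just_after_disclosure=True,
                 patch_available_when_exploited=False)

print(is_zero_day_mcbride(stuxnet))      # False -- fails the paper's definition
print(is_zero_day_patch_based(stuxnet))  # True  -- matches how most people use the term
```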
I would rather see (as I said at the last BaySec meeting) a zero-day definition more along the lines of those used in other industries with similar risk concerns, like healthcare and energy, where the measure is severe and unknown impact. A zero-day in that sense is characterised by the extended resources that must be directed at figuring out what is happening, because existing controls are ineffective and new ones have to be developed to detect and prevent it.
But I’ll go along with their definition for the sake of argument. There are other parts of the paper I find troubling.
Take this analysis of software security standards, for example:
…when zero-day attacks are more common events, the social benefits with security investment regulation become depressed. Since the risk associated with this type of attack cannot be shed by proper patch maintenance, it tends to get managed by significant reductions in usage which serve to control the network externalities. When usage is generally low, the social impact of security investment becomes marginalized
This is based on a strange assumption. They define zero-day attacks as attacks that lack a patch. They define security investments as…I’m not sure. I could not find their definition, but it seems to be limited to patching. No wonder, then, that they conclude the benefit of investing in patches becomes “depressed” for events that a patch cannot affect.
Their paper seems to boil down to a circular argument, a tautology: the conclusion is already baked into the definitions.
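A toy model makes the circularity visible. This is my own sketch, not the authors’ model: if the only investment lever allowed is patch coverage, and a zero-day is defined as an event patches cannot touch, the conclusion writes itself.

```python
# Toy illustration (my own, not the paper's model): "security investment"
# is modelled only as patch coverage, and a "zero-day" is defined as an
# event that patching cannot affect.

def expected_loss(base_loss: float, patch_coverage: float, is_zero_day: bool) -> float:
    """patch_coverage in [0, 1] is the only 'investment' this model allows."""
    if is_zero_day:
        return base_loss                       # by definition, patching does nothing
    return base_loss * (1.0 - patch_coverage)  # patching reduces ordinary losses

# Raising patch coverage from 0.0 to 1.0 changes nothing for zero-days:
for coverage in (0.0, 0.5, 1.0):
    print(coverage, expected_loss(100.0, coverage, is_zero_day=True))
# 0.0 100.0 / 0.5 100.0 / 1.0 100.0 -- "investment" is inert by construction
```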
Moving their argument to another risk scenario illustrates why this is wrong. Investing in security for dams, such as checking for causes of failure, affects the ability of a dam to withstand floods and pressure never seen before. Should regulators require investment in security practices for building dams, or should they say that patches on dams do not prevent failure and therefore no incentive exists for investment?
Investing in patches for a poorly designed product is just one control option. It is not clear why the authors fixate on it as the only option for investment. It’s obvious that patches are not a good investment once the water is running over the top of a dam, but investing in dam designs that are more resilient to flooding (e.g. spillways)…that’s a sound investment.
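Extending the same toy sketch (again my own assumption, not the paper’s model) with a second channel for design resilience shows how quickly the “no incentive” conclusion falls apart:

```python
# Toy counterpoint: allow a second investment channel, "design resilience",
# and the value of security investment against zero-days is no longer zero
# by construction.

def expected_loss(base_loss: float, patch_coverage: float,
                  design_resilience: float, is_zero_day: bool) -> float:
    """design_resilience in [0, 1]: spillway-style investment that limits
    damage even from events no patch can address."""
    loss = base_loss * (1.0 - design_resilience)
    if not is_zero_day:
        loss *= (1.0 - patch_coverage)  # patches still help against known flaws
    return loss

for resilience in (0.0, 0.5, 0.75):
    print(resilience, expected_loss(100.0, 1.0, resilience, is_zero_day=True))
# 0.0 100.0 / 0.5 50.0 / 0.75 25.0 -- investment now matters for zero-days too
```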
It’s a long and detailed paper that is otherwise well written, except for the ongoing dance around its tautology of patches.
They do not diminish the argument that security investments can be far more potent than patches, and that regulators can increase the quality of products by making vendors responsible for poor design practices. It still stands to reason that regulators can have an impact on quality, which will help reduce the frequency of so-called zero-day flaws (e.g. the long tail of SMB and CVE-2011-0654).