The Beginning Wasn’t Full-Disclosure

An interesting personal account of vulnerability disclosure called “In the Beginning There was Full Disclosure” makes broad statements about the past.

In the beginning there was full disclosure, and there was only full disclosure, and we liked it.

I don’t know about you, but immediately my brain starts searching for a date. What year was this beginning?

No dates are given, only clues.

First clue: a reference to RFP.

So a guy named Rain Forest Puppy published the first Full Disclosure Policy promising to release vulnerabilities to vendors privately first but only so long as the vendors promised to fix things in a timely manner.

There may be earlier versions. The RFP document doesn’t have a date on it, but links suggest 2001. The lack of a date seems a bit strange for a policy. I’ll settle on 2001 until another year pops up somewhere.

Second clue: vendors, meaning Microsoft.

But vendors didn’t like this one bit and so Microsoft developed a policy on their own and called it Coordinated Disclosure.

This must have been after the Gates memo of 2002.

Both clues say the beginning was around 2000. That seems odd because software-based updates in computers trace back to 1968.

It also is odd to say the beginning was a Microsoft policy called Coordinated Disclosure. Microsoft says they released that in 2010.

Never mind 2010. Responsible disclosure was the first policy/concept at Microsoft; they mention it in 2003, right after the Gates memo on security, discussing how Tavis Ormandy decided unilaterally to release a 0day on XP.

Thus all of the signals, as I dug through the remainder of the post, suggest vulnerability research beginning around 15 years ago. To be fair, the author gives a couple earlier references:

…a debate that has been raging in security circles for over a hundred years starting way back in the 1890s with the release of locksmithing information. An organization I was involved with, L0pht Heavy Industries, raised the debate again in the 1990’s as security researchers started finding vulnerabilities in products.

Yet these references cover too short a history (the 1890s wasn’t the first release of locksmith secrets) and are not independent (L0pht takes credit for raising the debate around itself) for my tastes.

Locksmith secrets are thousands of years old. So is their disclosure. Pin-tumblers are called Egyptian locks because that’s where they are said to have originated; technically the Egyptians likely copied them from Mesopotamia (today Iraq). Who believes Mesopotamia was unhappy their lock vulnerabilities were known? And that’s really only the tip of the iceberg for thousands of years of disclosure history.

I hear L0pht taking credit again. Fair point. They raised a lot of awareness while many of us were locked in dungeons. They certainly marketed themselves well in the 1990s. No question there. Yet were they raising the debate or joining one already in progress?

To me the modern distributed systems debate raged much, much earlier. The 1968 Carterfone case, for example, ignited a whole generation seeking boundaries for “any lawful device” on public communication lines.

In 1992 Wietse Venema appeared quite adamant about the value of full disclosure, as if trying to argue it needed to happen. By 1993 he and Dan Farmer had published the controversial paper “Improving the security of your site by breaking into it”.

They announced a vulnerability scanner that would be made public, the first of its kind. For me this was a turning point in the industry: trying to justify visibility in a formal paper and force open discussion of risk within an environment that mostly had preferred secret fixes. The public Emergency Response and Incident Advisory concepts still meant working with vendors on disclosure, which I will get to in a minute.

As a side note, the ISS founder claims to have written an earlier version of the same vulnerability scanner. Although possible, so far I have found nothing outside his own claims to back this up. SATAN was free and has far wider recognition (i.e. the USENIX paper), and it was easily found running in the early 1990s. I remember when ISS first announced in the mid 1990s; it appeared to be a commercial version of SATAN that did not even try to distinguish or back-date itself.

But I digress. Disclosure of vulnerabilities in 1992 felt very controversial. Those I found were very hush-hush, and the ethical discussions of exposing weakness are clearly captured in the Venema/Farmer paper. There definitely was still secrecy, and not yet a full-disclosure climate.

Just to confirm I am not losing my memory, I ran a few searches on an old vulnerability disclosure list, the CIAC. Sure enough, right away I noticed secretive examples. A January 4, 1990 notice for the Texas Instruments D3 Process Control System gives no details, only:

TI Vuln Disclosure

Also in January 1990, Apple had the same type of vulnerability notice.

Even more to the point, and speaking of SATAN, I also noticed HP using a pre-release notice. This confirms my memory isn’t far off; full disclosure was not a norm. HP issued a notice before the researchers made the vulnerabilities public.

HP SATAN

Vendors shifted how they respond not because a researcher released a vulnerability in the name of full disclosure, which a vendor had powerful legal and technical tools to dispute. Rather, SATAN changed the economics of disclosure by making the discussion with a vendor about self-protection through awareness first-person and free.

Anyone could generate a new report, anywhere, anytime, so the major vendors had to contemplate the value of responding to an overall “assessment” relative to other vendors.
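
To make that concrete, here is a minimal sketch (my own illustration, not SATAN’s actual code) of what a self-service assessment amounts to: probe your own hosts for listening services commonly flagged as risky and print a report, with no vendor in the loop. The host list and the port-to-finding mapping are invented for the example.

```python
# Minimal sketch of a SATAN-style self-assessment: probe hosts for
# listening services commonly flagged as risky, then print a report.
# The host list and port-to-finding mapping are illustrative only.
import socket
from datetime import datetime, timezone

RISKY_PORTS = {
    21: "FTP (cleartext credentials)",
    23: "telnet (cleartext credentials)",
    111: "portmapper/RPC exposure",
    512: "rexec (trust-based access)",
    513: "rlogin (trust-based access)",
    514: "rsh (trust-based access)",
}

def probe(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def assess(hosts):
    """Print a simple first-person findings report for each host."""
    print(f"Self-assessment report, {datetime.now(timezone.utc):%Y-%m-%d %H:%MZ}")
    for host in hosts:
        findings = [desc for port, desc in RISKY_PORTS.items() if probe(host, port)]
        print(f"  {host}: " + ("; ".join(findings) if findings else "no flagged services"))

if __name__ == "__main__":
    assess(["127.0.0.1"])  # only scan hosts you are authorized to test
```

The point is the economics: the report is first-person and free, which is what forced vendors to weigh the cost of not responding.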

Anyway, great thoughts on disclosure from the other blog, despite the difference on when and how our practices started. I am ancient in Internet years and perhaps more prone than most to dispute historic facts. Thus I encourage everyone to search early disclosures for further perspective on a “Beginning” and how things used to run.

Updates:

@ErrataRob points out SATAN was automating what CERT had already outed, and that the BUGTRAQ mailing list (started in 1993) was meant to crowd-source disclosures after CERT wasn’t doing it very well. Before CERT, people traded vulns in secret for a long time. CERT made that harder, but it was BUGTRAQ that really shut down trading because it was so easy to report.

@4Dgifts points out that discussion of vulns on the comp.unix.security USENET newsgroup started around 1984.

@4Dgifts points out a December 1994 debate where the norm clearly was not full disclosure. The author even suggests blackhats masquerade as whitehats to get early access to exploits:

All that aside, it is not my position to send out full disclosure, much as I might like to. What I sent to CERT was properly channeled through SCO’s CERT contact. CERT is a recognized and official carrier for such materials. 8LGM is, I don’t know, some former “black hat” types who are trying pretty hard to wear what looks like a “white hat” these days, but who can tell? If CERT believes in you then I assume you’ll be receiving a copy of my paper from them; if not, well, I know you’re smart enough to figure it out anyway.

[…]

Have a little patience. Let the fixed code propagate for a while. Give administrators in far off corners of the world a chance to hear about this and put up defenses. Also, let the gory details circulate via CERT for a while — just because SCO has issued fixes does not mean there aren’t other vendors whose code is still vulnerable. If you think this leaves out the freeware community, think again. The people who maintain the various login suites and other such publically available utilities should be in contact with CERT just as commercial vendors are; they should receive this information through the same relatively secure conduits. They should have a chance to examine their code and if necessary, distribute corrected binaries and/or sources before disclosure. (I realize that distributing fixed sources is very similar to disclosure, but it’s not quite the same as posting exploitation scripts).

US President Calls for Federal 30-day Breach Notice

Today the US moved closer to a federal consumer data breach notification requirement (healthcare has had a federal requirement since 2009 — see Eisenhower v Riverside for why healthcare is different from consumer).

PC World says a presentation to the Federal Trade Commission sets the stage for a Personal Data Notification & Protection Act (PDNPA).

U.S. President Barack Obama is expected to call Monday for new federal legislation requiring hacked private companies to report quickly the compromise of consumer data.

States in America have each had a different approach to breach deadlines, typically led by California (starting in 2003 with SB1386 consumer breach notification) and more recently by healthcare. That approach seems to have given the Feds time to reflect on what is working before proposing a single standard.

In 2008 California moved to a more aggressive 5-day notification requirement for healthcare breaches after a crackdown on UCLA executive management missteps in the infamous Farah Fawcett breaches (under Gov Schwarzenegger).

California this month (AB1755, effective January 2015, approved by the Governor September 2014) relaxed its healthcare breach rules from 5 to 15 days after reviewing 5 years of pushback on interpretations and fines.

For example, in April 2010, the CDPH issued a notice assessing the maximum $250,000 penalty against a hospital for failure to timely report a breach incident involving the theft of a laptop on January 11, 2010. The hospital had reported the incident to the CDPH on February 19, 2010, and notified affected patients on February 26, 2010. According to the CDPH, the hospital had “confirmed” the breach on February 1, 2010, when it completed its forensic analysis of the information on the laptop, and was therefore required to report the incident to affected patients and the CDPH no later than February 8, 2010—five (5) business days after “detecting” the breach. Thus, by reporting the incident on February 19, 2010, the hospital had failed to report the incident for eleven (11) days following the five (5) business day deadline. However, the hospital disputed the $250,000 penalty and later executed a settlement agreement with the CDPH under which it agreed to pay a total of $1,100 for failure to timely report the incident to the CDPH and affected patients. Although neither the CDPH nor the hospital commented on the settlement agreement, the CDPH reportedly acknowledged that the original $250,000 penalty was an error discovered during the appeal process, and that the correct calculation of the penalty amount should have been $100 per day multiplied by the number of days the hospital failed to report the incident to the CDPH for a total of $1,100.
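
The penalty arithmetic in that quote is easy to check. A quick sketch using only the dates and the $100-per-day rate given above:

```python
# Checking the CDPH penalty math quoted above: $100 per day for each
# day past the five-business-day reporting deadline.
from datetime import date

confirmed = date(2010, 2, 1)   # breach "confirmed", per the quote
deadline  = date(2010, 2, 8)   # five business days later, per the quote
reported  = date(2010, 2, 19)  # date the hospital reported to the CDPH

days_late = (reported - deadline).days   # 11
penalty = 100 * days_late                # $1,100, matching the settlement
print(days_late, penalty)
```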

It is obvious that too long a timeline hurts consumers. Too short a timeline has been proven to force mistakes, with covered entities rushing to conclusions and then sinking time into disputing unjust fines and repairing reputations.

Another risk with too-short timelines (and a complaint you will hear from investigation companies) is that early notification undermines quiet investigations (e.g. criminals will erase their tracks). This is a valid criticism; however, it does not clearly outweigh the benefits to victims of early notification.

First, a law-enforcement delay caveat is meant to address this concern. AB1755 allows a report to be submitted 15 days after the end of a law-enforcement-imposed delay period, similar to caveats found in prior requirements to assist important investigations.
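
As a sketch of how that caveat plays out (my reading of the rule, not statutory text), the 15-day clock simply runs from the later of detection or the end of the law-enforcement delay:

```python
# Sketch of an AB1755-style deadline with a law-enforcement delay caveat:
# the report is due 15 days after detection, or 15 days after the end of
# any law-enforcement-imposed delay, whichever is later (my reading only).
from datetime import date, timedelta

def notification_due(detected, le_delay_end=None):
    base = detected + timedelta(days=15)
    if le_delay_end is not None:
        return max(base, le_delay_end + timedelta(days=15))
    return base

print(notification_due(date(2015, 3, 1)))                    # 2015-03-16
print(notification_due(date(2015, 3, 1), date(2015, 4, 1)))  # 2015-04-16
```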

Second, we have not seen huge improvements in attribution or accuracy after extended investigation time, mostly because politics starts to set in. I am reminded of when Walmart in 2009 admitted to a 2005 breach. Apparently they used the time to prove they did not have to report the credit card theft.

Third, there is value relative to the objective of protecting data from breach. Consider the 30-day Mandiant 2012 report for the South Carolina Department of Revenue. It ultimately was unable to figure out who attacked (although it still hinted at China), and it is doubtful any more time would have resolved that question. The AP has reported Mandiant charged $500K or more, and it also is doubtful many will find such high costs justified. Compare their investigation rate with the cost of improving victim protection:

Last month, officials said the Department of Revenue completed installing the new multi-password system, which cost about $12,000, and began the process of encrypting all sensitive data, a process that could take 90 days.

I submit to you that a reasonably short and focused investigation time saves money and protects consumers early. Delay for private investigation brings little benefit to those impacted. Fundamentally, who attacked tends to be less important than how a breach happened, and determining how takes a lot less time to investigate. As an investigator I always want to get to the who, yet I recognize this is not in the best interest of those suffering. So we see diminishing value in waiting and increased value in notification. Best to apply fast pressure; 30 days seems reasonable enough to allow investigations to reach conclusive and beneficial results.

Internationally, Singapore has the shortest deadline I know of, at just 48 hours. If anyone thinks keeping track of all the US state requirements has been confusing, working globally gets really interesting.
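
To illustrate the tracking problem, here is a toy lookup of the deadlines mentioned in this post. The figures are illustrative only, since real rules differ on business versus calendar days and on what event starts the clock:

```python
# Toy lookup of the breach-notification deadlines mentioned in this post.
# Figures are illustrative; real rules differ on business vs. calendar
# days and on what event starts the clock.
DEADLINES_HOURS = {
    "Singapore": 48,
    "California health, pre-2015 (5 business days)": 5 * 24,
    "California health, AB1755 (15 days)": 15 * 24,
    "US federal, proposed PDNPA (30 days)": 30 * 24,
}

for jurisdiction, hours in sorted(DEADLINES_HOURS.items(), key=lambda kv: kv[1]):
    print(f"{jurisdiction}: {hours} hours")
```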

Update, Jan 13:

Brian Krebs blogs his concerns about the announcement:

Leaving aside the weighty question of federal preemption, I’d like to see a discussion here and elsewhere about a requirement which mandates that companies disclose how they got breached. Naturally, we wouldn’t expect companies to disclose publicly the specific technologies they’re using in a public breach document. Additionally, forensics firms called in to investigate aren’t always able to precisely pinpoint the cause or source of the breach.

First, federal preemption of state laws sounds worse than it probably is. Covered entities of course want more local control at first, to weigh in heavily on politicians and set the rules. Yet look at how AB1755 in California unfolded: the medical lobby tried to get the notification window moved from 5 days to 60 days and ended up at 15. A federal 30-day rule, even where preemptive, isn’t completely out of the blue.

Second, disclosure of “how” a breach happened is a separate issue. The payment industry is the most advanced in this area of regulation; they have a council that releases detailed methods privately in bulletins. The FBI also has private methods to notify entities of what to change. Even so, generic bulletins are often sufficient to be actionable. That is why I mentioned the South Carolina report earlier. Here you can see useful details are public despite their applicability:

Mandiant Breach Report on SCDR

Obama also is expected today to make a case in front of the NCCIC for better collaboration between the private and government sectors (Press Release). That will be the forum for this separate issue. It reminds me of the 1980s debate about control of the Internet, led by Rep. Glickman and decided by President Reagan. The outcome was a new NIST and the awful CFAA. Let’s see if we can do better this time.

Letters From the Whitehouse: