Research into the VPN flaw at Google has led me to believe they do not want anyone talking about it. That brought me to an odd conclusion: only a few months after the giant company said the Chinese were behind an attack on its infrastructure (one that arguably came through a simple backdoor/VPN), Google was found suggesting almost the same strategy to Chinese citizens, namely that they use VPNs to evade security perimeters.
Hypocritical? I am not at liberty to disclose all the details I have found, but hopefully someday things will become clearer. Meanwhile, a story from 2008 about Google's security vulnerability disclosure propaganda has already become a bit clearer. Surveillance State wrote back then:
Question: You’re a multibillion dollar tech giant, and you’ve launched a new phone platform after much media fanfare. Then a security researcher finds a flaw in your product within days of its release. Worse, the vulnerability is due to the fact that you shipped old (and known to be flawed) software on the phones. What should you do? Issue an emergency update, warn users, or perhaps even issue a recall? If you’re Google, the answer is simple. Attack the researcher.
The punchline is here:
Miller, the unnamed Googlers argued, acted irresponsibly by going to The New York Times to announce his vulnerability instead of giving the Big G a few weeks or months to fix the flaw:
Google executives said they believed that Mr. Miller had violated an unwritten code between companies and researchers that is intended to give companies time to fix problems before they are publicized.
Compare that with how Google acted in 2010, when its own security researcher released a vulnerability notice to the public just five days after he reported it to the vendor, a competitor of Google. He did not go to The New York Times and post a general warning or notice. He posted extensive details to a mailing list monitored by the very people who know how to write exploits.
What did the Google executives say about this disclosure? A violation of the unwritten code? Irresponsible? Apparently not.
The Google researcher defended his actions by saying time was up; attackers already knew of the exploit. However, you do not need a PhD in ethics to see that he could have given Microsoft the opportunity to respond on its own terms. Why did he decide it was his responsibility to disclose the vulnerability before a patch was ready? Why did he expect to be spared the reaction Google gives to security disclosures made outside its own walls?
Microsoft has been known to announce vulnerabilities before patches are ready, and it could be argued that over the past five years they have set a reasonable model for vulnerability management and disclosure. Google, not so much.
All that being said, the official Google position on this disclosure now seems to come from Google's own security blog, where Google security staff call "responsible" disclosure a form of irresponsible permissiveness:
We’ve seen an increase in vendors invoking the principles of “responsible” disclosure to delay fixing vulnerabilities indefinitely, sometimes for years; in that timeframe, these flaws are often rediscovered and used by rogue parties using the same tools and methodologies used by ethical researchers. It can be irresponsible to permit a flaw to remain live for such an extended period of time.
This makes Google look either rudderless on security or like proponents of hypocrisy.
“Innovation Fail” Photo by MadMothist
How do we reconcile attacks on security researchers by Google executives with attacks on executives by Google security researchers? Have they changed their position? I hope Tom Toles is watching this.
The good news is that Google is so big and so influential that this kind of floundering, headless approach to the social, economic, and political aspects of security is forcing important questions on everyone. Microsoft has already put forward a reasonable response (they may have had it ready) by suggesting "Coordinated Vulnerability Disclosure". This sounds not unlike what Google executives were opining in 2008:
Newly discovered vulnerabilities in hardware, software, and services are disclosed directly to the vendors of the affected product, to a CERT-CC or other coordinator who will report to the vendor privately, or to a private service that will likewise report to the vendor privately. The finder allows the vendor an opportunity to diagnose and offer fully tested updates, workarounds, or other corrective measures before detailed vulnerability or exploit information is shared publicly. If attacks are underway in the wild, earlier public vulnerability details disclosure can occur with both the finder and vendor working together as closely as possible to provide consistent messaging and guidance to customers to protect themselves.
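To make the contrast concrete, here is a minimal sketch in Python of the decision logic the Microsoft passage describes. Everything in it is my own illustration: the 60-day grace window, the field names, and the dates (chosen to echo the five-day Google disclosure above) are assumptions for the sake of the example, not part of Microsoft's actual policy text.

from dataclasses import dataclass
from datetime import date, timedelta

# Toy model of the coordinated-disclosure flow quoted above.
# The 60-day window and field names are illustrative assumptions.

@dataclass
class Vulnerability:
    reported_to_vendor: date
    patch_available: bool = False
    attacks_in_the_wild: bool = False

def may_publish_details(vuln: Vulnerability, today: date,
                        grace: timedelta = timedelta(days=60)) -> bool:
    """Return True when public detail disclosure is consistent
    with the coordinated model sketched in the quote above."""
    if vuln.patch_available:
        return True   # vendor has shipped a tested fix
    if vuln.attacks_in_the_wild:
        return True   # earlier disclosure: users need guidance now
    return today - vuln.reported_to_vendor > grace  # vendor has stalled

# Reported June 5, disclosed June 10: fails unless attacks are underway.
v = Vulnerability(reported_to_vendor=date(2010, 6, 5))
print(may_publish_details(v, today=date(2010, 6, 10)))  # False
v.attacks_in_the_wild = True
print(may_publish_details(v, today=date(2010, 6, 10)))  # True

Under those assumptions, a five-day disclosure only passes the test if the attacks-in-the-wild condition holds, which is exactly the claim the Google researcher made, and exactly the kind of claim Google's 2008 position would have demanded be verified with the vendor first.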
Perhaps Google is not hypocritical. Perhaps they are not putting a low value on security management. They might just not be sure which foot is left and which is right, and are still working out the kinks before they start walking. That is possible. My prediction is that by 2011 a Google executive memo will finally reach their security researchers, assuming systems are available, and they will co-announce with Apple a new and innovative program called coordinated disclosure of vulnerabilities. They also might extend the bounty program to UI and functionality flaws in their products (Google Maps sends you to the wrong place? Report it and collect $1,000!) and start giving responsible information in their own disclosures.