At the start of this year there was a report from IDG, which said that Google maintains a system to access data about its users. Much ado was made about outside hackers somehow getting inside and taking control of a system meant to be used only in the rare case of law enforcement search warrants:
…they apparently were able to access a system used to help Google comply with search warrants by providing data on Google users, said a source familiar with the situation, who spoke on condition of anonymity because he was not authorized to speak with the press. “Right before Christmas, it was, ‘Holy s***, this malware is accessing the internal intercept [systems],'” he said.
The high-profile decision was then made to leave China. Um, really? Malware spreading internally is all about evildoers in China? It seemed political at best.
Now a story has emerged that at least one Google employee was abusing internal access to customer data. Less than a year after the China incident, it looks to me like a serious Google security failure, one based on a weak internal trust model, was reported first by its victims.
This is not another cyberwarrior in China, mind you, but a Google employee in the company’s Kirkland, Washington office.
Will Google have to make the tough decision to withdraw from the Kirkland area, or will they go all the way and pull Internet services from the state of Washington? I jest. They are probably looking at the entire Pacific Northwest. Seriously, though, the Gawker exclusive story says the abuse went on for months before parents had to get involved and wake up the sleeping giant of cloud services.
It’s unclear how widespread Barksdale’s abuses were, but in at least four cases, Barksdale spied on minors’ Google accounts without their consent, according to a source close to the incidents. In an incident this spring involving a 15-year-old boy who he’d befriended, Barksdale tapped into call logs from Google Voice, Google’s Internet phone service, after the boy refused to tell him the name of his new girlfriend, according to our source. After accessing the kid’s account to retrieve her name and phone number, Barksdale then taunted the boy and threatened to call her.
In other cases involving teens of both sexes, Barksdale exhibited a similar pattern of aggressively violating others’ privacy, according to our source. He accessed contact lists and chat transcripts, and in one case quoted from an IM that he’d looked up behind the person’s back. (He later apologized to one for retrieving the information without her knowledge.) In another incident, Barksdale unblocked himself from a Gtalk buddy list even though the teen in question had taken steps to cut communications with the Google engineer.
Gawker explains that the attacker, an evil insider if you will, was able to do all this because Google appointed Barksdale to a group called Site Reliability Engineers (SRE).
Tame title. They do not get called superuser, administrator, wheel, root or any of the usual designations of power. This employee was just an engineer given complete and open access to user data…to ensure “reliability”. It is like giving a plumber the keys to your house, car and safe deposit box just in case you have a plugged sink.
Good security management practices would say there are three parts to protecting customer data: confidentiality, integrity and availability, or CIA.
Decoupling confidentiality from this triad is not a safe move for customers, but it is easy to see how such a thing might come to pass: of the three, a loss of confidentiality is the least visible to a finance department. Giving your plumber the keys so he can unplug your sink at a moment’s notice does not necessarily mean he is reading your diary, but how will you know when your diary is read? And what will the plumber do with your private information?
Executives almost always know that when the servers are down, money is lost. Availability is in high demand and highly visible. Similarly, users almost always know when integrity is broken (data is wrong or missing). Like availability, integrity has demand and is visible. “Hey, the plumber scribbled on a page in my diary!”
That leaves confidentiality. Who can say whether confidentiality is preserved even if it is in high demand? “Hello, this is your plumber. Pay me double or I tell your wife about…”
With that in mind, consider the following explanation by Gawker of how Google manages user data:
Barksdale’s intrusion into Gmail and Gtalk accounts may have escaped notice, since SREs are responsible for troubleshooting issues on a constant basis, which means they access Google’s servers remotely many times a day, often at odd hours. “I was looking at that stuff [information stored on Google’s servers] every hour I was awake,” says the former Google employee. And the company does not closely monitor SREs to detect improper access to customers’ accounts because SREs are generally considered highly-experienced engineers who can be trusted, the former Google staffer said.
“There’s a whole bunch of trust involved. There’s a lot of data inside Google, and I’m willing to bet some of it is really valuable. But for me and the people I worked with, it was never worth looking at.”
The Google employee with access to all the data said they get “a whole bunch of trust”. Someone pinch me.
- They get trust that availability will happen. Easy to manage. Users monitor and report on that every minute of every day.
- They get trust that the data will be the right data. Again easy to manage. Users will verify that as soon as they have access to their data.
- They get trust that data will remain confidential? Anyone? Anyone? Security compliance and audit are the answer here, running independently of engineers; a sketch of what that could look like follows this list.
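Independent here means the audit runs under compliance, not under the engineers being watched. As a minimal sketch, assuming a hypothetical access log with engineer, account and ticket_id fields (my invention, not Google’s actual format), a first-pass control could flag every read of customer data that is not tied to an open support ticket for that same customer:

```python
# Minimal sketch of an independent confidentiality audit, run by a
# compliance team rather than the engineers being audited.
# Log format and field names are hypothetical illustrations.
from collections import Counter

def flag_unjustified_access(access_logs, open_tickets):
    """Return log entries where an engineer read customer data
    without a corresponding open support ticket for that account."""
    flagged = []
    for entry in access_logs:
        # An access counts as justified only if it cites a real
        # ticket that covers the same customer account.
        ticket = entry.get("ticket_id")
        if ticket not in open_tickets or open_tickets[ticket] != entry["account"]:
            flagged.append(entry)
    return flagged

def repeat_offenders(flagged, threshold=3):
    """Engineers with repeated unjustified reads get a human review."""
    counts = Counter(e["engineer"] for e in flagged)
    return {eng: n for eng, n in counts.items() if n >= threshold}

if __name__ == "__main__":
    logs = [
        {"engineer": "alice", "account": "user1", "ticket_id": "T-100"},
        {"engineer": "bob",   "account": "user2", "ticket_id": None},
        {"engineer": "bob",   "account": "user3", "ticket_id": None},
        {"engineer": "bob",   "account": "user4", "ticket_id": "T-999"},
    ]
    tickets = {"T-100": "user1"}  # ticket -> account it covers
    print(repeat_offenders(flag_unjustified_access(logs, tickets), threshold=2))
    # {'bob': 3}
```

Even something this crude would surface a pattern of repeated, ticket-less reads of the same accounts, which is exactly the pattern Gawker describes.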
Given the above situation, Google has released a formal explanation to Gawker:
We carefully control the number of employees who have access to our systems, and we regularly upgrade our security controls–for example, we are significantly increasing the amount of time we spend auditing our logs to ensure those controls are effective. That said, a limited number of people will always need to access these systems if we are to operate them properly–which is why we take any breach so seriously.
I detect a tone of defense, or self-justification. Note the caveat: “That said.”
It is not normal to have a wide-open caveat like “always need to access” tossed in with a control statement. This is not the voice of security management or compliance. Controls to protect confidentiality are not meant to fail on first blush.
Someone should be suspicious when a plumber says “ZOMG the Interpipes are plugged! I need access! Where’s your bedroom?”
Some have said to me confidentiality doesn’t matter if there is no availability. That is a fair argument, unless you weigh out all risks by severity and likelihood. Moreover, confidentiality should not prevent availability if engineered properly. Even if a plumber is considered someone who “will always need access to these systems if we are to operate them properly”, that does not mean a plumber has to be left alone, or work in the dark, or be trusted. The cost associated with confidentiality may actually be worth it.
Google management, if it had a chief information security officer, could have said instead that while a limited number of people need to access systems, those people will always be subject to oversight and control. They also could have emphasized transparency around their security management practices. This is the message customers probably expect or will soon demand. Google does itself no favors in terms of trust by ending every statement with the urgency of availability alone, or by saying “This was not an assault on cloud computing. It was an attack on the technology infrastructure…”.
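To make “oversight and control” concrete, here is a hedged sketch, entirely my assumption rather than anything Google has described, in which production access is brokered per request, approved by a second person, time-boxed, and written to the same log the independent auditors review:

```python
# Hypothetical sketch of brokered access instead of standing
# superuser rights. Names and workflow are illustrative only.
import time

class AccessBroker:
    def __init__(self):
        self.audit_log = []  # consumed by the independent audit team

    def request_access(self, engineer, account, reason, approver):
        """Grant short-lived access only with second-party approval."""
        if approver == engineer:
            raise PermissionError("self-approval is not oversight")
        grant = {
            "engineer": engineer,
            "account": account,
            "reason": reason,
            "approver": approver,
            "expires": time.time() + 3600,  # one hour, not permanent
        }
        self.audit_log.append(grant)  # every grant is visible to audit
        return grant

broker = AccessBroker()
grant = broker.request_access(
    "alice", "user1", "ticket T-100: mail sync bug", approver="carol"
)
print(grant["expires"] > time.time())  # True: access is time-boxed
```

The point of the design is that “need to access” and “left alone with the data” are decoupled: the plumber still gets into the house, but someone else turns the key, writes down the time, and takes it back an hour later.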
A (cloud) provider like Google has to belly up to the confidentiality bar at some point and start serving a proper three-course security meal (C, I and A) or customers will pull away in fear. The cloud will be seen as lacking control, so customers will turn to providers able to demonstrate trust. That trust will be demonstrated through compliance with, and audits for, privacy and security standards. If Google thinks it is a pain to prove and monitor confidentiality of data now, just wait until it has lost trust and has to win it back.
Facebook reported similar failures for user data earlier this year. I do not believe this is a problem exclusive to Google, although Facebook may not be the best example of this point. I have read many reports, some even from Facebook staff, that Facebook is very successful at wooing Google engineers to jump ship.
In conclusion, the inside attack is a serious issue that needs to be addressed properly, with known standards of security, even at the largest and most successful cloud providers. This incident raises the bar while blocking a provider’s ability to blame outsiders: the Chinese, or Microsoft software, or bad scientists, or bad journalists.
As the old Russian saying goes: “Trust, but verify.”