What better thing to do on Sunday than read, and then comment on, a blog post about breaking the compliance rules of Shabbat? Bromium's Tal Klein, a self-proclaimed Bromide, provides an amusing look at religious rule-breaking:
Enter the Shabbat industry: an entire business model dedicated to keeping devout Jews in compliance with divine policy while creatively circumventing it – in search of enablement (there’s even Shabbat toilet paper).
So when IT decides to play Moses and declare a policy or implement software that puts users in a box, the expectation should be that users will find a creative way around it (and that the bad guys will find a way in), all while ostensibly in-policy.
Tal makes a typical point; to put it simply, water flows downhill, so if you try to block the water you can expect it to flow around the blockage.
My first issue with this line of reasoning is that it over-simplifies the compliance market. It is easy to say enabling is preferred to disabling, but that is a false dichotomy. Disabling unnecessary and unused services can be justified, for example, because it reduces harm with no expected workaround and no demand to flow around the control. It is like plugging a leaky board in a ship everyone is sailing on. Yet Tal seems to suggest that this would be like "pretending to know what's actually happening," and he portrays compliance as perpetually lagging behind:
Policy can’t be a shackle. Compliance standards are by their very nature at least a generation behind modern technology. We can’t predict what users will need in order to do their jobs tomorrow, so we shouldn’t force them to work in a whitelisted box – doing so is pretending to know what’s actually happening. No enforcement can be tough enough to stop the tide of ingenuity. Not whitelisting, not remote wipe, not MDM, not DLP, not VDI. People want to go up to their floor on Shabbat, and they don’t want to take the stairs.
Back to the water analogy: stopping the flow (assuming that is the real intent) involves knowing a lot about the forces involved, just as when building a dam. There are benefits to creating a box and stopping the flow, in other words to disablement. Those who assess the effectiveness of controls can in fact know a great deal about harnessing the power of creative forces, as illustrated below.
Tal says that "…no enforcement can be tough enough to stop the tide…" but, just as with building a dam, economic and political factors influence the success of compliance more than engineering does. "Tough" enforcement is often a question of resolve rather than technical prowess. Moreover, regulators usually do not race towards the latest technology precisely because those they regulate do not race towards it either; it is a commonality of acceptance, rather than the choices of a few outliers, that defines progress. The compliance market simply is not reducible to a decision about whether or not to make policy a "shackle".
My second issue extends from the first point. A perfect example of the complexity of compliance is the assessment of harm. This is not a failure of compliance "awareness" but rather a burden placed on regulators by those who are regulated. Tal's conclusion, "People want to go up to their floor on Shabbat, and they don't want to take the stairs," raises the question "so what?" For anyone who can use either the elevator or the stairs without consequence to themselves or anyone else, the choice is a false and mostly meaningless dichotomy (the exception being those who cannot take the stairs). Consider instead the recent compliance case of Chevron:
Federal authorities have opened a criminal investigation of Chevron after discovering that the company detoured pollutants around monitoring equipment at its Richmond refinery for four years and burned them off into the atmosphere, in possible violation of a federal court order…
Pollution is by definition harmful, unlike the soft question of stairs versus elevator and what people want. The pollution question becomes how to use compliance to spur innovation and reduce the risk of harm, including known harm with very lasting effects.
Caterpillar in the early 2000s provides an excellent counter-example. The company filed more than 500 patents on clean diesel to reduce documented illness and absence among those involved in heavy machinery operations. The reduced sick-time and increased productivity were the result of a compliance framework that altered the market enough to allow for innovation: polluting was made less economically advantageous. Earlier this year the company thus cited its innovation as a partnership with regulators, the two working together to improve the overall result.
“For Caterpillar, it’s fitting that we’re celebrating cutting-edge technology in California as this state is the birthplace of Caterpillar innovation” […] “Today we’re marking another transformation in the industry: engines that offer our customers superior performance at near zero emissions.”
You might even say Caterpillar was doing what Bromium is positioning itself to do: bring to market a cleaner and lower-risk user experience, driven by compliance requirements for the same. Keep this in mind when you read Chevron's response to the investigation of their bypass pipe:
Federal criminal investigators are trying to determine who at Chevron was aware of the bypass pipe and whether the company used it intentionally to deceive air-pollution regulators. Chevron says its use was inadvertent…
Bruce Schneier's recent book, Liars & Outliers, gives a long and detailed analysis of this phenomenon. People often break out of group and social boundaries, resisting conformance and compliance. Groups tolerate this in many contexts where there is some benefit, but when there is harm, compliance sometimes does slam the door shut. An "inadvertent" pollution pipe will not be rewarded as innovative under societal or regulatory standards, yet it would be under the logic of Tal's post.
There is a nuanced relationship here rather than a simple case of regulators always being "behind" or "pretending". Regulators often understand risk more broadly than those they have to rein in, for the same reason they can predict consequences better than any individual user. Regulators often see the forest as well as the trees, working across many different needs, which allows them to bring innovation frameworks to those they regulate. One reason for this complex relationship is that regulations and compliance tend to be centred around risk management; they account for some principle of consequence and harm for both the individual and the group.
So rather than throw up our hands on compliance because user ingenuity always wins, we should continue to study the methods and science of stopping a tide, with the aim of finding and improving the natural balance. A measured approach to compliance can be a powerful generator of ingenuity that also benefits groups of users.
Tal concludes with “The challenge, then, is to develop a security policy around enablement, not the other way around.”
We have many examples of innovation in security where enablement was built around security policy. What benefit comes from reversing that? Why not use policy to define the need for security and generate demand for Bromium? Policies for preventing and detecting breaches, for example, were not developed around enablement, yet they still stimulated innovation in sandboxing and segmentation.
Back to Tal's analogy of religious Jews and compliance: to put it in terms of the famous Israeli philosopher Martin Buber, the key is perhaps to think of regulators and users as less of an Ich-Es (I-It) relationship and more of an Ich-Du (I-Thou).
Those who work together generate balanced policy; those who work in opposition get imbalanced policy. Balanced policy can shackle harm while still enabling innovation. Benefits can even come from stopping a tide, ingenuity and positive innovation among them:
[Image: Beaver Lodge, by Richard Orr © Dorling Kindersley]
Hi Davi,
I was supposed to meet you at that BayLISA event but ended up having to go to the Gartner Security & Risk Management Summit instead.
Thank you for the well-thought-out response.
I think most of your commentary seems to agree with what I wrote: we both agree that keeping users happy and productive is key to ensuring a happy marriage between security and productivity.
The primary disconnect, I believe, is the "happy and" part before "productive"; that is where we disagree. Your perspective seems to be that those in charge of security have more visibility into threat characteristics and thus are more qualified to assess what users "really" need. To quote a more colloquial reference than Buber, allow me to channel Agent Smith from "The Matrix":
Did you know that the first Matrix was designed to be a perfect human world? Where none suffered. Where everyone would be happy. It was a disaster. No one would accept the program.
We can't pretend to understand what users need in order to feel "enabled". The stuff we want to restrict is access to our information and infrastructure, not the things that users "do". The more we restrict activity, the more users will go around us. Until we figure out what "going around us" means, we are more vulnerable than ever. So I stand firm that information security strategists need to move the focus away from the "I" in IT and toward the "U" in User.
I’d love to continue this dialog on a podcast or something. We should find a forum to chat.
-Tal
Hi Tal,
Sounds good to me. I'd be happy to do a podcast or another forum. ISACA-SF is this week and I'll be speaking on BYOD tomorrow (my Cloud Audit presentation was today). Hope you can come and join us.
Good quote from The Matrix, but the first line gives away the problem: "designed to be a perfect" world. Nothing designed to be perfect can escape the reality of dissent. What about the more realistic design for a better human world? Who designs that? Do we leave it to the users? What if Alice wants to do things harmful to Bob, and Bob wants to do things harmful to Charlie (e.g. pollution)?
The Matrix is a critique of bad leadership (those who believe they can create a perfect human world). The film indicts policy only in the way a bent nail indicts the carpenter: we shouldn't throw away the whole bag of nails because a carpenter complains about one. In other words, we still need policy to handle the conflict and competition among users, especially those who know what is best for themselves but don't care about anyone else.
Not all regulators, policy-wonks and auditors have to pretend to understand what users need. Users tend to be very vocal about what they need, and regulators listen. The question I have is: if we shift all focus to enabling users, then what is the point of listening? Who would be in a position to take action to resolve differences (i.e. write a policy) when Bob and Alice fight? At some point one of them will have to be disabled (e.g. a "usability" problem like Chevron's pollution pipe). We can't leave it to Chevron alone or PG&E alone (e.g. the San Bruno explosion)…
"Going around us" does not have to mean, always and implicitly, the good guys going around the bad guys. Instead we could think of it as a member of one group going around a member of another group, or of their own group (e.g. crossing a double-yellow line or a dotted-white line has an impact on everyone on the road; it is not just resistance to the people who measured and painted the lines).
My point is that those in charge of security can be great leaders who measure and anticipate risk to create rules that benefit groups. Don't take away all authority and hand it to the users just because there is evidence of bad security leadership; we have at least as many examples of bad security behavior by users. A balance between the two is best.
I've written a little about this for tomorrow in terms of the Cuban Missile Crisis, which was 50 years ago. It will post in the early morning.
Davi