Queensland Qops Arrest Security Reporter
A reporter attended AusCERT and then wrote a story about a privacy presentation based on a stunt. The details of the stunt are not very interesting. In brief, photos set to private on Facebook were found exposed on publicly accessible systems, which is like saying cheese was made from milk today. What is actually interesting is that the reporter who wrote the story was then arrested by authorities.
I can’t help but wonder what made the police so interested in this case. The journalist was said to have received a copy of the presentation materials.
…arrested by Queensland Police yesterday and threatened with charges relating to the receipt of “tainted material”.
Tainted material? That piqued my curiosity.
A little digging around uncovered a 2002 reference called “Protection of privacy under the general law following ABC v Lenah Game Meats Pty Ltd” at the Australasian Legal Information Institute.
The facts in Lenah raised a novel issue under Australian law: namely, the extent to which a media organisation can be restrained from publishing material obtained as a result of a trespass, in circumstances in which the organisation is not itself implicated in the trespass.
Yes, indeed, a novel idea ten years ago. So what’s the answer?
Lenah argued in the case that reporters who receive information that had been removed by a third party without authorisation should have to treat it as confidential.
…upon analogy with the action for breach of confidence, it was claimed that information obtained as a result of a trespass should be treated as equivalent to confidential information, meaning that it is possible to restrain disclosure by a third party that knows the information has been acquired unlawfully
That seems basically the same as the reported situation at AusCERT and apparently in 2002 the court did not agree with Lenah’s argument:
…the equitable doctrine of ‘unconscionability’ was not an independent equitable basis for awarding an injunction
I’m not a lawyer but I’m pretty sure that says there has to be another trigger mechanism to compel police to jump into action and prevent the disclosure of information. This sentence makes it clearer:
Drawing on the US tort of unreasonable intrusion into seclusion, Gleeson CJ proposed that information or conduct should be regarded as private if disclosure ‘would be highly offensive to a reasonable person of ordinary sensibilities’
Australian common law thus seems to only protect privacy as incidental to other protection from defamation and other recognised forms of harm.
The journalist at AusCERT would not be compelled to treat the information as confidential even if he knew that it had been obtained without authorisation. He would be at risk of violating privacy only if disclosure of the information could cause harm. So was that the assumption of the police? Was that a tip they received — there was taint?
Unfortunately, no, according to a statement they made later. Their rationale was far less eloquent or compelling and perhaps not even in accord with the law:
Receiving a photograph obtained from a Facebook account without the user’s permission is the same as receiving a stolen TV, Queensland Police have said after the arrest of a Fairfax journalist.
The head of the Queensland police fraud squad, Brian Hay, admitted this morning that police were “still cutting our teeth” in the rapidly evolving online environment and named cyber crime as the biggest law-enforcement challenge.
If they are going to arrest everyone who has photographs taken from Facebook without user permission, and treat them as stolen goods…well, I can see how they might find cyber crime challenging if that’s their position.
Good luck to them on investigating all those “stolen TVs” on the Australian Brocial Network. That must be taking a lot of their time lately, I mean arresting all the Bros.
GFCI for Google Toast
I’ve probably beaten the toaster metaphor to death by now, but just in case it’s still alive here’s one more note on it.
My presentations often use a slide on the fallacy of treating commodity alone as a measure of safety and reliability. I argue that the toaster was unsafe for 54 years, until the invention of the ground fault circuit interrupter (GFCI). This was a big theme in my VMworld presentation in 2010, based on my ISACA presentation from 2009.
I received rave reviews on this metaphor and was told it helped frame the toilet technology timeline (T3) presented by the Cloud Security Alliance at RSA San Francisco a few months later.
I am not sure inefficient, unsanitary and unhealthy waste management concepts made for the best security metaphor, but it definitely is more amusing than toasters. Whether you have a commodity metaphor or a commode metaphor, it does not change the fundamental issue of progress in customer safety. How do you trust a provider? Would you wear this?

It is not safe to assume controls for a commodity will be included within a commodity design itself, or even within the architecture presented by a manufacturer or service provider.
Cloud providers may not have security as their goal any more than toaster manufacturers aim to prevent you from being electrocuted (not that they couldn’t). Many important security features are likely to be an add-on or an external improvement; a result of external factors like regulation.
Tokens and encryption at the client are complicated but good examples of this effect. You can replace all the data in the cloud with tokens or data encrypted by a key to which the cloud provider never has access.
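To make that concrete, here is a minimal Python sketch of both approaches, under some obvious assumptions: it uses the third-party cryptography package, and the upload() call at the end is a hypothetical placeholder rather than any real provider API.

```python
import secrets
from cryptography.fernet import Fernet  # pip install cryptography

# Tokenization: swap the sensitive value for a random surrogate and keep the
# mapping in a local "vault" the provider never sees.
vault = {}

def tokenize(value: str) -> str:
    token = secrets.token_hex(8)
    vault[token] = value          # the real value stays on the client
    return token                  # only the surrogate goes to the cloud

# Client-side encryption: the key is generated and held locally, so the
# provider only ever stores ciphertext.
key = Fernet.generate_key()

def encrypt_for_cloud(plaintext: bytes) -> bytes:
    return Fernet(key).encrypt(plaintext)

record_token = tokenize("4111 1111 1111 1111")
ciphertext = encrypt_for_cloud(b"customer record 42")
# upload(record_token, ciphertext)   # hypothetical call; no key ever leaves
# Fernet(key).decrypt(ciphertext)    # decryption is only possible client-side
```

The trade-off, of course, is that the client now carries the key and vault management burden instead of the provider.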
Another example of this just popped up in CSO. A company figured out a way to make Google applications safer — use a third-party security mechanism. Here’s their GFCI for Google toast.
I started looking around at the third-party apps, some of which were administrative tools, to see if there was anything that could help me with the visibility component. I found CloudLock. Their tool gives me the ability to retrospectively know if something has been shared with the public, to an individual outside my domain, or within my own agency. We are using all three levels of sharing appropriately. The key to being able to use Google Docs is having the visibility on it.
The lesson is don’t take a bath in cloud applications unless you can detect and prevent a failure, and don’t assume safety controls are built into the apps or even offered by cloud providers. An external/add-on control can save your SaaS.
NIST SP 800-146 DRAFT Cloud Computing Synopsis and Recommendations
News from NIST. Comments are requested for a “plain terms” document for “decision makers”; my first comment is that more technical guidance would be more appropriate. A “how to” is what everyone is asking for, not a “should do”:
The cloud computing research team at the National Institute of Standards and Technology (NIST) is requesting public comments on a draft of its most complete guide to cloud computing to date.
Draft Special Publication 800-146, NIST Cloud Computing Synopsis and Recommendations, explains cloud computing technology in plain terms and provides practical information for information technology decision makers interested in moving into the cloud. Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (for example networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
Comments for this draft should be sent to 800-146comments@nist.gov by June 13, 2011.
To view the press release from NIST’s Public and Business Affairs office regarding this draft, please go to this NIST page:
http://www.nist.gov/itl/csd/20110512_cloud_guide.cfm
Here’s a sample directly relevant to my post on the Dropbox encryption and key management controversy now documented in a complaint to the FTC.
8.5.7 Key Management
Proper protection of subscriber cryptographic keys would appear to require some cooperation from cloud providers. The issue is that unlike dedicated hardware, zeroing a memory buffer may not delete a key if: (1) the memory is backed by a hypervisor that makes it persistent, (2) the VM is having a snapshot taken for recovery purposes, or (3) the VM is being serialized for migration to different hardware. It is an open issue on how to use cryptography safely from inside a cloud.
It seems to me that final statement is out of place and too concessionary; if accepted, it pretty much kills the FTC complaint.
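Still, the draft's underlying point is easy to demonstrate. Here is a minimal Python sketch (my illustration, not anything from the NIST text): even a deliberate zeroization routine only scrubs the copy of the key the guest itself controls.

```python
import os

# Keep key material in a mutable buffer so the application can overwrite it.
key = bytearray(os.urandom(32))

def zeroize(buf: bytearray) -> None:
    """Best-effort scrub of key material from the guest's own memory."""
    for i in range(len(buf)):
        buf[i] = 0

# ... use the key for crypto operations ...
zeroize(key)

# The catch the draft points at: this only erases the copy the guest can see.
# A hypervisor snapshot, persisted or swapped memory, or a serialized VM image
# taken before zeroize() ran can still hold the key bytes, and the guest has
# no way to reach those copies.
```

That is why the "cooperation from cloud providers" the draft asks for matters: the subscriber cannot solve this from inside the VM alone.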
And here’s a sample of the “should do” theme in the “General Recommendations”:
…protective mechanisms should be required by subscribers for separating sensitive and nonsensitive data at the provider’s site.
If I wanted that high-level of a guide I could use an existing standard.
