Category Archives: Security

Betfair’s Gamble on Disclosure

Nearly four million records, apparently including encryption keys, were stolen last year from the fast-growing gambling company. The Telegraph reports the company was forced to report the theft to law enforcement, partners and regulators:

The theft was so serious that Betfair was forced to inform the UK’s Serious Organised Crime Agency (SOCA), the Australian Federal Police and German law enforcement officials. It also notified the UK Gambling Commission and the Maltese Lotteries and Gaming Authority, as well as Royal Bank of Scotland, its “acquiring bank” – the lender responsible for accepting credit and debit card payments made via Betfair.

They did not, however, report it to the owners of the records, the people who would actually be impacted.

Its July report to regulators states it had decided there was no reason to inform its customers, after taking advice from SOCA that “public disclosure would be detrimental to any intelligence operation or investigation”.

The argument for not disclosing the breach to customers supposedly hinged on a little detail about whether sensitive track data was exposed.

“We have taken the prudent view that the criminal has the expertise to decrypt the payment card details,” Betfair admitted, though stressed that the “CVV2/CVC security numbers” were not stolen.

It said advice from RBS was that “this very significantly limits the ability of the cards to be used fraudulently”.

That’s nonsense, of course. If the data were so hard to use fraudulently, why was it encrypted in the first place? The PCI DSS wouldn’t be so strict about encrypting card data and securely destroying it if the RBS argument about “significantly limits” were true. And this is RBS we are talking about, I have to point out, a company itself infamous for weak security.

The CVV2/CVC values were not present because storing them is strictly prohibited, but that does not mean the card brands say go ahead and let the rest of the data float around. More to the point, criminals make fraudulent use of cards all the time without the CVV2/CVC.
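To make the distinction concrete, here is a minimal sketch of the only card-number handling PCI DSS treats as safe in the clear: masking down to at most the first six and last four digits (Requirement 3.3). The function name and formatting are illustrative, not from any standard library.

```python
def mask_pan(pan: str) -> str:
    """Mask a primary account number (PAN) down to the most that
    PCI DSS Requirement 3.3 allows to be displayed in the clear:
    the first six and last four digits."""
    digits = pan.replace(" ", "").replace("-", "")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))  # 411111******1111
```

Anything beyond this, a full PAN in particular, must be rendered unreadable wherever it is stored, which is exactly why “the criminal has the expertise to decrypt the payment card details” is such a damning admission.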

It’s a story to make many people upset, surely, but there is a darkly humorous twist in the details. They only discovered the breach when a server that should have been monitoring for breaches crashed, two months after the intrusion started.

The first Betfair knew of the theft was when a “production log server” crashed in its Malta data centre on May 20 – more than two months after the initial breach. That led to the discovery that “at least nine servers [had] been compromised in the UK and two in Malta”.

Hey, someone check the log server. It stopped responding. Oh, well look at that, the logs say we have been breached for a while.
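The generalizable lesson is to monitor the monitors: a log pipeline should alert on its own silence, not just on what the logs contain. A minimal sketch of such a freshness check (the path and threshold are illustrative):

```python
import os
import time

def log_is_fresh(path: str, max_age_seconds: int = 300) -> bool:
    """Return True only if the log file exists and was written to recently.
    The point: a dead log server is itself an incident, not merely
    missing data, so staleness must raise an alert."""
    try:
        age = time.time() - os.path.getmtime(path)
    except OSError:
        return False  # missing or unreadable log: treat as stale
    return age <= max_age_seconds
```

Run something like this from a separate host on a schedule; if the check itself stops reporting, that too is an alert condition.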

That might scare some executives into proceeding with caution, but Betfair not only took a gamble by not disclosing the breach to customers; they then took an even bigger gamble by going public while facing serious operational deficiencies.

Just a month before the decision to press ahead with the float, Betfair had received a “Forensic Investigation Report” on the cyber theft from security consultancy Information Risk Management (IRM).

Its first conclusion was that: “Appropriate information security governance is not in place within Betfair and as a consequence the business has been exposed to significant risks.”

Another one? That “appropriate technical controls relating to such elements as network segregation and file integrity monitoring that would provide Betfair the ability to deter, prevent and detect such an incident are not in place”.

Now we can watch and see how the gambles work out for them.

How (not) to Fail an Audit

As I have written here several times before, like in a post on accepting mistakes to reduce their frequency, I am a big fan of the phrase “fail faster”.

Many years ago, when I was Director of IT and Security at a very large enterprise, I was fond of saying “fail faster” to my staff. I wanted them to feel comfortable with the idea that they should focus on always improving. The CIO was not fond of this and constantly asked why a Director of Security, of all people, would encourage failure.

I could give a hundred examples (sports, martial arts, arts, etc.) where a perfect score is not only unlikely but self-defeating. This was familiar to some, but others still tried to prove to me that “only first place matters” and failures always should be downplayed or obscured. My fear was that their behavior was a slippery slope to fraud. Their concern was that my behavior was demotivating.

Today a colleague read a post called “Fail a Security Audit Already — it’s Good for You” and asked me if this means QSAs are too soft on their clients. The author gives this analysis:

If the audit is a stress-test of your environment that helps you find the weaknesses before a real attack, you should be failing audit every now and then. After all, if you’re not failing any audits there are two possible explanations:

1) You have perfect security.

2) You’re not trying hard enough.

I disagree and will try to explain why this case is different. The author clearly is not speaking from the auditor’s perspective. You don’t want to tell companies to fail a PCI DSS audit. It’s a subtractive system: a company does not get a pass until it has closed all areas of remediation or compensation and can prove that things are running smoothly on an ongoing basis. The following paragraph gives a strange depiction of the audit process.

Companies should be failing audits, whether internal or external, far more often than they suffer breaches. The fact that few companies are failing any audits should be cause for concern, not celebration.

How exactly has the author concluded the “fact” that few companies are failing audits? As a long-time auditor I find companies trying to pass audits far more often than they are being breached. I would call this reviewing test results and remediation in order to pass an audit.

And what celebration is the author talking about? When an auditor leaves a passing score there typically is a sigh of relief, not celebration. I am tempted to suggest this to a restaurant. Next time the health inspector gives them a passing score I will ask them to serve free cake and champagne. Probably won’t fly. I suspect there is no evidence of celebration.

The “fail faster” motto works for rapid, iterative improvement, but telling a company to “try” to fail an audit or an exam is bad advice. It’s like saying your tachometer isn’t trying hard enough if it doesn’t fail every once in a while to report the correct RPM. Or that you aren’t a good driver unless you try to fail your license test. Imagine if auditors tried to fail their certification exams to prove they were really trying hard to understand the regulations.

The decision of when to try and fail is nuanced. It can be confusing, which goes back to why the CIO cautioned me about motivation and interpretation. There are some things you want to fail and measure frequently (e.g. practice runs, tests) and things you don’t want to fail (e.g. final exams). The CSO article does not make this important distinction, and never mentions weighing the consequences of failure when it tells you to fail. If we limit our definition of an audit to a formal one (the final Report on Compliance for the Payment Card Industry Security Standards Council) then it is not good advice to try and fail. You should try to pass, by failing faster.

Ex-Vormetric Execs Start High Cloud

Bill Hackenberger (VP of Engineering at Vormetric) and Steve Pate (CTO at Vormetric) quit the company in 2009 and have now started…an encryption company. Steve Pate also claims to have been a founder of HyTrust, which could explain why they have named their new company High Cloud.

They are offering “early access to a Beta version of our solution” (early Beta = Alpha?) so they are far from ready for prime-time, but they appear to be in the right mindset and offer a variation of proxy architecture, similar to HyTrust. Here is a diagram presented by the CTO in 2008 that has a dedicated/physical key management server.

They list the capabilities that auditors have been asking for from cloud providers for years…the following functionality, for example, maps to some of the old text of PCI DSS compliance requirements.

  • Selected elements of the VM are encrypted.
  • VMs are encrypted in storage, in transit, and in backups.
  • VMs are protected in the data center, outside when run on a remote server, or in the Cloud.
  • Keys are not visible to anyone.
  • Separation of duties guarantees that no single person can cause catastrophic damage.
  • Key rotation to satisfy regulatory bodies is performed automatically without the need to shut down the VM.

Although I have to say, the line “keys are not visible to anyone” is poorly written and suggests vaporware. I would have expected better given how long the founders have been in the industry and the text provided by regulatory bodies. Here are the PCI DSS Requirement 3.5 testing procedures, for reference.

  • 3.5.1 Examine user access lists to verify that access to keys is restricted to the fewest number of custodians necessary
  • 3.5.2.a Examine system configuration files to verify that keys are stored in encrypted format and that key-encrypting keys are stored separately from data-encrypting keys.
  • 3.5.2.b Identify key storage locations to verify that keys are stored in the fewest possible locations and forms.

The regulations specify need-to-know, not invisible to anyone. I also noted a mistake in their reference to the ISO requirements. It’s still early, so maybe these issues will be worked out by the time a non-early Beta is available.
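Incidentally, the key separation in 3.5.2.a is exactly what makes “rotation without the need to shut down the VM” plausible: you rotate the key-encrypting key (KEK) by re-wrapping the data-encrypting key (DEK), never touching the bulk ciphertext. A toy sketch of the structure, where XOR with an equal-length random pad stands in for a real key-wrap algorithm such as AES Key Wrap:

```python
import secrets

KEY_LEN = 32  # 256-bit keys

def wrap(dek: bytes, kek: bytes) -> bytes:
    # XOR with an equal-length key stands in for a real key-wrap
    # cipher (e.g. AES-KW); it illustrates the structure only and
    # is NOT a secure wrap if the KEK is ever reused.
    return bytes(a ^ b for a, b in zip(dek, kek))

unwrap = wrap  # XOR is its own inverse

dek = secrets.token_bytes(KEY_LEN)      # data-encrypting key
kek_old = secrets.token_bytes(KEY_LEN)  # key-encrypting key, generation 1
wrapped = wrap(dek, kek_old)            # only the wrapped DEK is stored

# Rotation: unwrap with the old KEK, re-wrap with a new one.
# The data encrypted under the DEK is never touched, so the VM
# keeps running throughout.
kek_new = secrets.token_bytes(KEY_LEN)
wrapped = wrap(unwrap(wrapped, kek_old), kek_new)

assert unwrap(wrapped, kek_new) == dek
```

The design choice matters for auditors too: separate KEK and DEK storage (3.5.2.a) is what lets you demonstrate rotation without re-encrypting terabytes of VM images.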

Breaches Down for Third Year

A quick look at the all time datalossdb.org chart of breaches tells you something is up with the data…or down.

At the past several conferences where I have presented, I have explained why breaches overall are down even as attacks of a certain type on a certain industry are up. But maybe I should start a series called ZOMG BREACHES DOWN 40% FROM 2008, given today’s bone-rattling story from the Washington Business Journal called “Computer security incidents reported by federal agencies increase 650%”:

Federal agencies reported more than 40,000 security incidents that placed sensitive information at risk during 2010 — a 650 percent increase compared to five years ago, according to a new report from the Government Accountability Office.

First of all, I think it’s fantastic that more incident reporting is happening and the GAO is on top of reporting progress to the public. But that doesn’t mean a reporter should just throw that number out unwashed and imply the incidents “placed sensitive information at risk”.

Such an implication will confuse readers, myself included, because…second of all, their very next paragraph says incidents are a far broader area of concern than just risk of disclosure.

…”security incidents” don’t always equate to an all-out breach. (According to US-CERT, they include successful and failed attempts to gain unauthorized access to a system or its data, unwanted disruption, unauthorized use of a system for the processing or storage of data, and changes to system hardware, firmware, or software characteristics without the owner’s knowledge.)

The big story is that the GAO is seeing the same kind of curve in its data that the datalossdb project saw right after 2004, the year following California’s breach notification law, SB 1386. I could talk all day about what we have learned about breaches and incident reporting since 2003. But let’s just say I am disgruntled that in 2011 a reporter would toss out a headline grenade of a 650% increase in incidents while ignoring that overall breaches (not incidents reported, breaches) are in decline.
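For scale, a quick sanity check of what the headline figure implies, assuming “a 650 percent increase” means the 2010 count is 7.5 times the count from five years earlier:

```python
reported_2010 = 40_000   # "more than 40,000 security incidents" in fiscal 2010
growth = 6.5             # "a 650 percent increase"

# A 650% increase over the baseline means: 2010 = baseline * (1 + 6.5)
baseline = reported_2010 / (1 + growth)
print(round(baseline))   # 5333
```

Roughly 5,300 reported incidents five years earlier: consistent with far more reporting, not necessarily far more breaches.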

Here’s a classic quote:

The four most prevalent types of security incidents reported to US-CERT during fiscal 2010 include the detection of malicious code, improper usage and unauthorized access, and detected anomalies that warrant further review.

I see that as three types of security incidents and an additional category of stuff not yet figured out. Imagine if the headline was instead reporting a 650% increase in stuff not yet figured out.

Update: I should have also mentioned my earlier post that California has taken a big step forward again with SB 24 and the push for a centralized breach data repository. This issue just came up again at the federal level and the emphasis is clearly on better oversight.

If you can read past the unsubstantiated barking by fearful politicians about there being no “precedent in history for such a massive and sustained intelligence effort” (you obviously don’t have to know history to get elected), there are some genuinely good nuggets, like this advice from RSA:

Asked for suggestions on improving U.S. cybersecurity, [Art Coviello, executive chairman of RSA Security] called on Congress to pass a national data breach notification law, and he called on the U.S. government to share more information about cyberattacks with private companies. A quicker method of sharing information between the government and businesses is needed, he said, because in a large majority of successful cyberattacks, businesses don’t know they were breached until the U.S. Federal Bureau of Investigation or some other third party tells them.

A national breach notification law would help reduce much of the confusion about attack source and consequences; perhaps it would even allow us to better settle the debate over what constitutes a “sophisticated” attack. Speaking of RSA, see you all next week at the conference where I’ll discuss many of the above issues.