Category Archives: Security

CObIT 4

Several people have asked me what’s new and different about the latest release of the Control Objectives for Information and related Technology (CObIT4). I have not yet read the official release from the Information Systems Audit and Control Foundation and the IT Governance Institute (the primary backers), but here are some of the things that have stood out so far:

The framework has some basic rewording and reorganization that is intended to be more consistent with other standards, such as ITIL (convergence is good). For example, Plan and Organize 8 (PO8) “Ensure compliance with external requirements” has been completely removed and the text transferred to a new Monitor and Evaluate 4 (ME4) “Ensure regulatory compliance”, which replaces the old ME4 “Provide for independent audit” since that was considered outside the scope of IT. Deliver and Support 8 (DS8) was renamed “Manage service desk and incidents”, with Deliver and Support 10 (DS10) being renamed to “Manage problems”, which means problems will be handled separately from incidents. You get the idea…

There is also a shift from five resources to four:
– People
– Information (instead of “Data”)
– Applications
– Infrastructure (to replace both “Technology” and “Facilities”)

And the overall structure has been changed to
– Control over IT processes of…
– to satisfy the business requirement of…
– is achieved by…
– is managed by…
– and is measured by…
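
Purely as an illustration of that waterfall (the class and field names here are my own shorthand, not official CObIT terminology, and the DS8 details are a hypothetical paraphrase), the new structure maps naturally onto a small record type:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ControlObjective:
    """One high-level control objective, following the CObIT4
    'control over... to satisfy... is achieved by... is managed by...
    and is measured by...' waterfall."""
    process: str                  # Control over IT processes of...
    business_requirement: str     # to satisfy the business requirement of...
    achieved_by: List[str] = field(default_factory=list)   # is achieved by...
    managed_by: List[str] = field(default_factory=list)    # is managed by...
    measured_by: List[str] = field(default_factory=list)   # and is measured by...

# Hypothetical content for the renamed DS8 objective
ds8 = ControlObjective(
    process="DS8 Manage service desk and incidents",
    business_requirement="effective and timely response to user queries and incidents",
    achieved_by=["a well-run service desk", "incident registration and escalation procedures"],
    managed_by=["trend reporting on incident volumes and resolution times"],
    measured_by=["user satisfaction", "percentage of incidents resolved within agreed time"],
)
print(ds8.process)
```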

Protecting your trail

A recent decision of the Bankruptcy Appellate Panel of the 9th Circuit (Vee Vinhnee v. AMEX: Dec 16, 2005) seems to suggest that adequate controls to protect audit logs must be in place in order to prove the authenticity of digital information.

I have heard some conclude that this leads directly towards cryptographic protections, but it seems plausible to me that proper access controls and strong identity management might also be argued to be sufficient, if not compensatory.

The testimony by AMEX employees who routinely accessed the data was non-expert, and it suggests that they could only assume controls were in place but did not know or verify that they were. This appears to have opened up the possibility that the data could not be proven to be authentic.

The decision explores the issue of authenticity and has some interesting citations, such as George L. Paul, The “Authenticity Crisis” in Real Evidence, 15 PRAC. LITIGATOR No. 6, at 45-49 (2004). It also calls out a specific “scientific” methodology to help examine the “validity of the theory underlying computers and of their general reliability”:

Professor Imwinkelried perceives electronic records as a form of scientific evidence and discerns an eleven-step foundation for computer records:
1. The business uses a computer.
2. The computer is reliable.
3. The business has developed a procedure for inserting data into the computer.
4. The procedure has built-in safeguards to ensure accuracy and identify errors.
5. The business keeps the computer in a good state of repair.
6. The witness had the computer readout certain data.
7. The witness used the proper procedures to obtain the readout.
8. The computer was in working order at the time the witness obtained the readout.
9. The witness recognizes the exhibit as the readout.
10. The witness explains how he or she recognizes the readout.
11. If the readout contains strange symbols or terms, the witness explains the meaning of the symbols or terms for the trier of fact.

The decision then suggests that step four is of particular importance, given the lack of proof that controls existed to ensure the accuracy of data:

The testimony of the records custodian at trial regarding the computer equipment used by American Express was vague, conclusory, and, in light of the assertion that “[t]here’s no way that the computer changes numbers,” unpersuasive.

If you read the testimony yourself, you can see the issue the decision is referring to…

I couldn’t testify to exactly what – what the model is or anything like that. It’s – you know, our computer system that we’ve used for, you know, quite some time to produce the documents, to gather the information, to store the information and then, you know, produce the statements to the card members. And we – you know, it’s highly accurate. It’s based on the fees that go in. There’s no way that the computer changes numbers or so.

I can imagine a million ways to be more convincing/prepared with regard to the controls used to protect the data in question. But the real question, I guess, is whether cryptographic controls should now be considered a minimum requirement?
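
For what it’s worth, the kind of cryptographic control in question does not have to be exotic. Here is a minimal sketch of a hash-chained (tamper-evident) audit log, my own illustration rather than anything prescribed by the decision; the key handling and record fields are obviously simplified:

```python
import hashlib
import hmac
import json

# Hypothetical secret held by the logging system; in practice it would live
# somewhere the people who write log entries cannot reach (e.g. an HSM).
LOG_KEY = b"replace-with-a-real-secret"

def append_entry(log, record):
    """Append a record, chaining an HMAC over the previous entry's tag so
    that any later change to an earlier entry breaks every tag after it."""
    prev_tag = log[-1]["tag"] if log else ""
    payload = json.dumps(record, sort_keys=True)
    tag = hmac.new(LOG_KEY, (prev_tag + payload).encode(), hashlib.sha256).hexdigest()
    log.append({"record": record, "tag": tag})

def verify_log(log):
    """Recompute every tag; True only if nothing was altered, reordered,
    or removed from the middle of the chain."""
    prev_tag = ""
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hmac.new(LOG_KEY, (prev_tag + payload).encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["tag"]):
            return False
        prev_tag = entry["tag"]
    return True

log = []
append_entry(log, {"event": "statement generated", "account": "1234"})
append_entry(log, {"event": "fee posted", "amount": "29.00"})
print(verify_log(log))               # True
log[1]["record"]["amount"] = "0.00"  # tamper with history
print(verify_log(log))               # False
```

A custodian who could point to something like this (plus sane key management) would have had a much easier time answering the accuracy question than “there’s no way that the computer changes numbers.”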

Controls Map

With the recent release of ISO17799:2005 and CObIT4 I guess I need to rewrite my controls map (not to mention the long list of privacy laws debated in California during 2005). I really like the ISO revision, but am still catching up with CObIT. One of the challenges of helping organizations stay on top of their controls is choosing the right blend of guidance and frameworks. I’m not saying you have to use a blend, but since they are never a perfect fit and different groups have their favorites (auditors love COSO/CObIT, engineers go for ISO, ex-gov folks bring up the NSA and NIST, etc.), I find it helps to pull it all together into a shared map. For example:

SYSTEM INTEGRITY – Controls that ensure the integrity of the environment by utilizing proactive measures to prevent and detect unauthorized changes.

  • Gateway Filtering
  • Anti-virus
  • Encryption
  • Access Controls

  • ISO.17799 (8)(3) – Protection against malicious software
  • ISO.17799 (8)(7) – Exchange of information & software
  • ISO.17799 (10)(3) – Cryptographic controls
  • ISO.17799 (10)(5) – Security of system files
  • NIST.800-14 (3)(14) – Cryptography
  • NSA IAM (9) – Virus protection
  • AB 1950 (Wiggins) – California State Personal Information Security
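
When the map covers more than a handful of domains, I find it easier to keep it in a structured form and generate a view per audience. A minimal sketch, using only the references from the example above (the data shape is my own and not part of any of the frameworks):

```python
# An illustrative slice of the map, keyed by control domain. Each domain
# lists the controls it covers plus the framework references that different
# audiences (auditors, engineers, ex-gov) tend to ask about.
controls_map = {
    "SYSTEM INTEGRITY": {
        "description": "Proactive measures to prevent and detect unauthorized changes.",
        "controls": ["Gateway Filtering", "Anti-virus", "Encryption", "Access Controls"],
        "references": {
            "ISO 17799": [
                "(8)(3) Protection against malicious software",
                "(8)(7) Exchange of information & software",
                "(10)(3) Cryptographic controls",
                "(10)(5) Security of system files",
            ],
            "NIST 800-14": ["(3)(14) Cryptography"],
            "NSA IAM": ["(9) Virus protection"],
            "California": ["AB 1950 (Wiggins) Personal Information Security"],
        },
    },
}

# Example view: everything an ISO-minded engineer would want to see.
for domain, entry in controls_map.items():
    print(domain)
    for ref in entry["references"].get("ISO 17799", []):
        print("  ISO 17799", ref)
```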

Security vendors and trust

RSA 2006 is coming soon, so I am being barraged by security vendors hawking their wares. How do we separate the wheat from the chaff?

Here’s a hint: there is nothing more annoying than someone dangling an iPod in front of my face and asking me to tell them whether I am able to comply with some regulation. “Tell us if you violate the GLBA and we’ll give you an MP3 player” is downright insulting. It baffles me that someone who is basically anonymous would even ask that question and expect to get accurate data. And putting a picture of some cute person in front of me doesn’t improve things. Appropriate response: ignore or, if pressured, present bad data and walk away.

If you represent a security company, please help stop the madness. Random drawings for popular electronics based on contact information alone are one thing. Overtly saying “we’ll pay you to give us dirt on your employer” without establishing any modicum of trust should be grounds for being barred from security conferences.