Category Archives: Security

Peeing on the digital fence

Will Rogers once said:

There are three kinds of men. The ones that learn by reading. The few who
learn by observation. The rest of them have to pee on the electric fence for themselves.

It is interesting that he suggests learning comes only from input. I suspect output can also teach; we certainly learn from doing…

His last category fits Ranum’s notion about security and user education, although Ranum might have sounded more like “users have to pee on the digital fence…”, which of course would be electrified.

Preparing for another presentation on Nigerian Scams

As I prepare for my upcoming presentation with Dr. Harriet Ottenheimer, “False Harmony: Racial, Ethnic, and Religious Stereotypes on the Internet”, I am looking forward to discussing some of the remarkable new methods Nigerian scam artists have adopted since we started this project over four years ago.

We have noticed important changes since we began recording and dissecting the language of scam/fraud email messages. Actually, it is hard to believe so many years of research have already gone by and that we have already presented two papers on this topic (at ethno and anthropology conferences). I guess time flies when you’re fighting fraud. Well, more detail will be given at the upcoming presentation. In particular, I hope to highlight the changes and discuss how offensive and defensive measures feed off one another (adaptive tactics, if you will).

In related news, someone posted a BBC video report of a Nigerian EFCC (Economic and Financial Crimes Commission) and armed police takedown of a 419 club. It is good to see others working on documenting and analyzing the issue. By the way, I couldn’t help noticing the 1:24 mark, when one of the EFCC officers appears to strike a suspect violently in the back.

As I think about it, I am tempted to categorize this post as “history”, since the Nigerian advance fee fraud scam is now probably so well known that people and the media are becoming quite attuned to these particular risks. Nonetheless, the problem persists.

Come see us present our latest findings at the international ethnic studies conference in Turkey this November, if you’re interested.

Elementary school switches to biometrics

All in good fun, of course, under the noble cause of saving time at the lunch line, according to the Associated Press.

Two things are going on here, it seems to me. First, either the school administration is overly concerned with the efficiency of lunch lines or it is obscuring more significant justifications, such as trying to cut down on lunch “fraud”. Second, kids are apparently being asked to hand over a form of biometric data without being informed of the true trade-offs and future consequences, and without parental consent:

Rome City Schools is switching to a scanning system that lets students use their fingerprints to access their accounts. In the past, students had to punch in their pin numbers.

“The finger’s better because all you’ve got to do is put your finger in, and you don’t have to do the number and get mixed up,” said Adrianna Harris, a second grader at Anna K. Davie Elementary School.

The system “lets” them use their fingers. Hard not to jump to conclusions about an administration trying to entice kids with a particular view of privacy and “good-for-you” security at a vulnerable age. “Do you want to eat? Just give me your finger…” At least one parent is notably concerned:

“It may be perfectly secure, but my daughter is a minor and I understand that supposedly the kids have the option to not have their prints scanned, but that’s not being articulated to my daughter,” said Hal Storey, whose daughter is a 10th grader at Rome High.

Minors are allowed to decide very few things for themselves when it comes to privacy and identity, and yet this system relies on them to decide whether they want to give away some form of their biometric information. Even if you argue that driver’s licenses capture the same information later in life, you have to admit that it differs in at least two ways: 1) adult consent, and 2) an exchange for transportation/mobility.

When you are seven years old, less time in the lunch line might seem worth it. But what will you think when you become a teenager (adulthood in some cultures), or later on, long after your teenage years? Will you look back and say “I sure am glad my fingerprint was stored by the school”, or will you say “I wish I had known more about information security before I agreed to give my fingerprint data to the school that was later breached”? To be fair, the company paid to install the system points out that it does not intend to store a full fingerprint but instead records a digest made from a few spots expected to be unique:

The computer converts the fingerprint into an algorithm and scans six to eight unique points of the print, said Shawn Tucker, the technical support manager of Comalex, which is the company supplying Rome’s new system.

The data stored in the system is not an image of the child’s fingerprint like something you would find in an FBI database, he said. It is a list of points that together distinguish the child’s finger from that of other students.

No, not something you would find in an FBI database…yet. Of course, if the system truly records a unique identity for every student, it hardly matters how it goes about it: the FBI (or anyone else, for that matter) would only need a copy of the database to have unique biometric data as good as fingerprints, right? This is one of those “it’s highly accurate when used for good but not really accurate when used for bad” arguments you have to watch out for from biometric companies.
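
To see why “a list of points” can identify a student just as effectively as an image, consider a minimal sketch in Python. The template format and matching rule below are my own assumptions for illustration, not Comalex’s actual scheme: a template is just six or so (x, y, angle) minutiae points, and a “match” is any scan whose points line up with a stored template within a tolerance.

import math

# Hypothetical template format: a handful of minutiae points, each (x, y, angle).
# This is an illustrative stand-in, not the vendor's real data structure.

def points_match(p, q, dist_tol=10.0, angle_tol=15.0):
    """Two minutiae 'agree' if they sit close together and point roughly the same way."""
    return math.hypot(p[0] - q[0], p[1] - q[1]) <= dist_tol and abs(p[2] - q[2]) <= angle_tol

def match_score(template, scan):
    """Count how many stored points line up with points in a fresh scan."""
    return sum(any(points_match(t, s) for s in scan) for t in template)

# A leaked copy of the school's template database, keyed by student (invented values).
leaked_db = {
    "student_001": [(12, 40, 30), (55, 80, 110), (90, 33, 200),
                    (140, 60, 75), (33, 120, 15), (70, 150, 300)],
    "student_002": [(20, 22, 45), (64, 95, 130), (101, 41, 210),
                    (150, 70, 90), (40, 130, 25), (80, 160, 310)],
}

# Points taken later from the same finger, somewhere else entirely (slightly noisy).
fresh_scan = [(13, 42, 32), (54, 79, 108), (91, 35, 198),
              (139, 62, 77), (34, 119, 14), (71, 151, 298)]

best = max(leaked_db, key=lambda sid: match_score(leaked_db[sid], fresh_scan))
print(best, match_score(leaked_db[best], fresh_scan))  # -> student_001 6

The point is that the comparison works for whoever holds a copy of the database, which is exactly what makes even a “list of points” biometric data worth protecting.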

I’m not saying I am opposed to the plan, but based on this story it does not sound like the privacy rights of the children or their parents are being well valued or properly discussed with those who will be most impacted. Perhaps the idea was conceived by a fan of the TSA plan for speedier/preferential treatment of certain passengers. While that system is flawed for a number of other reasons, in comparison a loss of privacy in exchange for mobility is a far cry from a loss of privacy in exchange for a little more time at lunch, no? I’d like to see the school publish the trade-offs it considered, especially since it said this system was meant to benefit the students…

The same parent said, in the Rome News-Tribune, that his biggest issue was the lack of transparency and communication prior to the decision to take his child’s biometric data:

If he had been notified and informed about the technology before it was put in place, Storey said, he might have been fine with the new system.

“At this moment my plan is to instruct them, because they don’t have parental permission, to remove my daughter’s scan and have alternative means,” he said.

This gives “there’s no free lunch” a whole new meaning.

Núñez (wireless) Network Security Amendment

Some comments on Schneier’s blog suggest that the new amendment to the existing California Consumer Protection Against Computer Spyware Act (SB 1436) might actually be the work of the RIAA. Does anyone know who lobbied for this bill?

The Register story, which Schneier cites, does in fact mention an infamous case where a woman defended herself by claiming a lack of security:

Tammie Marson was accused by record labels Virgin, Sony BMG, Arista, Universal and Warner Brothers of illegally sharing copyrighted music files. She argued that because anyone in the vicinity of her house could have used her connection, the record labels could not rely on the fact that her connection was used, but would have to prove that she was the one actually performing the actions.

Marson’s lawyer, Seyamack Kouretchian of Coast Law Group, told OUT-LAW Radio that evidence that Marson’s connection was used was not enough. “The best that they could do, the absolute best, was prove that the music was on a computer that had accessed the internet through her internet connection,” he said. “You had neighbours who would have had access to her internet connection over a wireless router so it could have been anybody.”

However, a little reading of the text of the amendment itself suggests that it was not a reaction to the Marson case. First of all, it was introduced by Núñez (Los Angeles) and co-authored by Leno (San Francisco). They don’t seem like the type to be in the pocket of the RIAA, but anything is possible and I have not yet looked into it. Second, the introductory language in the final version focuses more on users who are unaware that security is an option than on any need to require them to use it. Could an RIAA lawyer argue that you agreed to secure your wifi when you opened the packaging? It is not clear what the warnings will say.

Earlier revisions of the amendment shed some more light on what Núñez was trying to do. They also show how far the bill has come. I can only imagine the reaction if he had kept lines like this one:

(b) “Encryption” means any process whereby a wireless connection to a wireless local area network (WLAN) is secured and is accessible only by the user of the wireless technology.

Encryption means secured. Clear? Note that the first draft also had a rather vague requirement:

A person or entity that sells wireless technology to a computer user in this state shall not sell that technology unless it contains encryption software or a similar encryption device, which shall be set as the default mode at the time of sale.

Eeek. That’s fingernails-on-the-chalkboard bad. Encryption software or a similar encryption device? Could someone define encryption, or at least throw a “reasonable” in front of it for good measure? And imagine substituting the draft’s own definition, encryption = secured: “secured software or a similar secured device”?

Anyway, not to beat a dead draft version, the final version has its own problems. For example:

Enabled security avoids this problem by preventing all but the most determined attempts to tap into a consumer’s network.

Great. Enabled security sounds like a good thing. I’m a little wary of who gets to define “determined attempts” and how, but I’ll leave that one alone for now. So, what’s the problem it is trying to solve?

Consumers are generally unaware when an unauthorized user is using their broadband network connection

Ahem. Who gets to tell the California Legislature that their solution has little to do with solving the problem? Neither warning labels nor encryption make users aware of unauthorized use of a wifi network. Wasn’t that the goal? Sure, there is a small chance that users might be able to prevent unauthorized use if they know what to do, but if the problem is that they are unaware or otherwise unable to detect unauthorized use…I’m just saying.

So despite all the problems brought forward for consideration, the solution they ultimately settled on amounts to little more than user education. I find it interesting that the final version has no requirement that security be enabled by default on devices, just a gentle prod to be aware that security exists. At least that is how I would interpret this “advise and make them affirm” language:

(3) Provide other protection on the device that does all of the following:
(A) Advises the consumer that his or her wireless network connection may be accessible by an unauthorized user.
(B) Advises the consumer how to protect his or her wireless network connection from unauthorized access.
(C) Requires an affirmative action by the consumer prior to allowing use of the product.

I guess I am OK with that. Advising the consumer about the option of security is just plain old education (POE), although I am not convinced that this is the right way to give companies an incentive to offer better security or make it more user-friendly. Ranum and Schneier gave their positions on the effectiveness of user education here. In a nutshell, Ranum says hard knocks and breaches are the form of education people can relate to, while Schneier contends that security should be easy enough to use that people will adopt it naturally. But rather than rehash that debate, the point is that the government of California has clearly said it wants users to be educated.

So when you look at the amendment’s solution, the real question becomes whether people are being denied the opportunity to protect their wifi (and related) security because they simply do not know about their security options. That is what the amendment appears to cover. Is this really something the government can effectively promote, especially if consumers actually want/need controls from manufacturers like real-time monitoring instead of just some legal disclaimers on a piece of packing tape?
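
For contrast, here is a rough sketch of what “real-time monitoring” could mean in practice, written as a hedged Python example rather than anything a vendor actually ships: the router (or a box on the LAN) keeps the owner’s list of known devices and flags anything else that shows up. The device names and MAC addresses below are invented for illustration.

# Hypothetical sketch: flag any device on the LAN that is not on the owner's
# approved list. In a real product the (ip, mac) pairs would come from the
# router's own ARP or DHCP lease table; here they are hard-coded so the
# sketch runs on its own.

KNOWN_DEVICES = {
    "aa:bb:cc:00:11:22": "my laptop",
    "aa:bb:cc:00:33:44": "my phone",
}

def check_lan(observed):
    """Return an alert for every observed (ip, mac) pair not in the known list."""
    return [
        f"Unknown device {mac} at {ip} is using your connection"
        for ip, mac in observed
        if mac.lower() not in KNOWN_DEVICES
    ]

snapshot = [
    ("192.168.1.10", "AA:BB:CC:00:11:22"),  # the owner's laptop
    ("192.168.1.23", "de:ad:be:ef:00:01"),  # someone else entirely
]

for alert in check_lan(snapshot):
    print(alert)

Even something this crude speaks to the awareness problem more directly than a disclaimer on the box, which is the gap the amendment leaves open.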

I don’t know if there is still time for revision, but I would suggest they try to find a way to give wifi device manufacturers an incentive to make security more reliable and accessible, and that does not necessarily mean direct regulation. A mere warning about the option to use a complex and faulty system (to combine the positions of Ranum and Schneier) does not generate the heat necessary to make security seem like a good trade-off to the average consumer.