Category Archives: Security

ATM Lie-detector in Russia

I love news stories like the one in the NYT called “A Russian A.T.M. With an Ear for the Truth”:

The machine scans a passport, records fingerprints and takes a three-dimensional scan for facial recognition. And it uses voice-analysis software to help assess whether the person is truthfully answering questions that include “Are you employed?” and “At this moment, do you have any other outstanding loans?”

Only an ear for truth? Now if they could add eyes to tell if a person is talking or just playing back a recording. How random are the questions? Would they prevent someone from using a replay of a stored voice signature?
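One way to blunt the replay concern is to randomize the challenge questions and bind each session to a fresh nonce, so a pre-recorded answer from an earlier session is unlikely to fit. A minimal sketch of that idea, with a hypothetical question bank (nothing here reflects Sberbank's actual system):

```python
import secrets

# Hypothetical question bank; a real system would draw from a much larger
# pool and pair each spoken answer with the voice-stress analysis.
QUESTIONS = [
    "Are you employed?",
    "At this moment, do you have any other outstanding loans?",
    "What is today's date?",
    "Please repeat the following number: {nonce}",
]

def pick_challenges(n=2):
    """Pick n distinct questions at random and embed a fresh nonce,
    so an answer recorded in a previous session cannot simply be
    replayed."""
    nonce = secrets.randbelow(10**6)
    pool = list(QUESTIONS)
    chosen = []
    for _ in range(n):
        q = pool.pop(secrets.randbelow(len(pool)))
        chosen.append(q.format(nonce=nonce) if "{nonce}" in q else q)
    return nonce, chosen
```

This only raises the bar for replay; it does nothing against a live accomplice answering on the applicant's behalf, which is the separate problem raised below.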

Sberbank says that to comply with the part of the privacy law that would prohibit a company from keeping a database of customers’ voice signatures, the bank plans to store customers’ voice prints on chips contained in their credit cards.

Stored how, and for how long, and how do you update it?

And how would this work with someone who is mute?

Another interesting case would be for a relative or other accomplice to answer the voice tests on behalf of the applicant. Can the system detect a woman’s voice for a male applicant, an old voice for a young applicant…?

Perhaps the most startling aspect to the story is how the company working on the technology does not understand the privacy implications.

“We are not violating a client’s privacy,” [Mr. Orlovsky, the Sberbank executive] said. “We are not climbing into the client’s brain. We aren’t invading their personal lives. We are just trying to find out if they are telling the truth. I don’t see any reason to be alarmed.”

Privacy violations do not require “climbing into the client’s brain”, and they do not require “invading their personal lives”. Those are bogus tests. A privacy violation involves collecting personal information (e.g. a voice signature) and failing to protect it from unauthorized disclosure.

The Schwartz is with RSA

Eddie Schwartz, CSO at a part of RSA (NetWitness), will take on the title of CSO at RSA. This confirms both that NetWitness was involved in the response to the recent RSA breach and that Mel Brooks is a comic genius.

The large and looming issues ahead for Schwartz do not appear to be related to an advanced or a persistent threat (APT), although that is obviously a good topic to drum up sales of security products.

Instead he will have to address the usual, routine and mundane security problems revealed by RSA’s breach blog entry:

  • Role Based Access Controls (RBAC): whether and where low-authority and therefore less-secure systems and users have access to high-value assets
  • Egress Filtering: why outbound file transfers are allowed to unknown or known hostile addresses (e.g. application-level inspection of traffic for RAT in reverse-connect mode)
  • Application sandboxing: why embedded binaries (e.g. Flash objects) are not stripped from Excel using the Microsoft Office Isolated Conversion Environment (MOICE) or similar
  • Awareness: if “certain groups” are targeted from the outside, then surely they can be even more easily targeted on the inside for training…like why they shouldn’t execute large email attachments in their spam folder
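The egress-filtering point above amounts to default-deny on outbound connections: a transfer is allowed only to an approved destination, so a RAT phoning home to an unknown address gets blocked. A minimal sketch of that check (the allowlist networks are made up for illustration, not anyone's real policy):

```python
import ipaddress

# Hypothetical allowlist of networks permitted to receive outbound file
# transfers; anything else should be denied and logged for review.
ALLOWED_EGRESS = [
    ipaddress.ip_network("10.0.0.0/8"),      # internal
    ipaddress.ip_network("203.0.113.0/24"),  # stand-in for an approved partner
]

def egress_permitted(dst: str) -> bool:
    """Default-deny egress: allow an outbound transfer only if the
    destination address falls inside an approved network."""
    addr = ipaddress.ip_address(dst)
    return any(addr in net for net in ALLOWED_EGRESS)
```

In practice this lives in a firewall or proxy rather than application code, and application-level inspection is still needed to catch a reverse-connect RAT tunneling over an allowed port.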

Zero-day exploits alone do not constitute advanced attacks, not least because the definition of what constitutes a zero-day is up for debate. A targeted email list alone does not constitute persistence. But whether or not the breach should get a popular label, congratulations go to RSA for giving me this opportunity to include a Spaceballs reference in my blog.

Surveillance of Drunks

The Sun suggests a good use for surveillance video in London — making fun of people who are impaired:

Our video of the rolling-drunk reveller tumbling acrobatically down stairs after a do at London’s posh Savoy Hotel has been a huge global hit after being posted here yesterday.

Note the cameras monitor the subject’s movements across several different views without anyone entering to offer assistance. Should someone have responded? Was it real or just choreographed?

Facial Recognition on Facebook

I agree with this general assessment of Facebook:

Brad Shimmin, an analyst with Current Analysis, said it’s clear that Facebook hasn’t learned any big lessons from its previous privacy brouhahas.

“Facebook’s repeated methodology of opting all users into new services, particularly services with potentially damaging ramifications, demonstrates a certain disregard for the security and privacy of its users,” Shimmin said.

There is no excuse for Facebook. They just fail and fail again. An opt-in system could be very easily advertised by them. What possible reason could they have to make it an opt-out?

The Facebook blog post does not hide the fact that they want their users to have to dig their way out of facial recognition software.

When you or a friend upload new photos, we use face recognition software (similar to that found in many photo editing tools) to match your new photos to other photos you’re tagged in. We group similar photos together and, whenever possible, suggest the name of the friend in the photos.

If for any reason you don’t want your name to be suggested, you will be able to disable suggested tags in your Privacy Settings. Just click “Customize Settings” and “Suggest photos of me to friends.” Your name will no longer be suggested in photo tags, though friends can still tag you manually.
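What Facebook describes boils down to nearest-neighbor matching: a face in a new photo is compared against faces already tagged with a name, and the closest match above some similarity threshold becomes the suggestion. A toy sketch of that idea using cosine similarity; the embeddings and threshold are made up for illustration, not Facebook's actual system:

```python
import math

# Toy face "embeddings": in a real system these come from a
# face-recognition model; here they are made-up 3-dimensional vectors.
TAGGED = {
    "alice": [0.9, 0.1, 0.2],
    "bob":   [0.1, 0.8, 0.3],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def suggest_tag(new_face, threshold=0.9):
    """Suggest the tagged friend whose embedding is most similar to
    the new face, or None if nothing clears the threshold."""
    best_name, best_sim = None, threshold
    for name, emb in TAGGED.items():
        sim = cosine(new_face, emb)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name
```

Note the privacy asymmetry: the matching runs on every uploaded face by default, and only the suggestion of *your* name can be switched off afterward.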

What’s the supposed benefit of facial recognition technology on a social network platform? Let’s say you are the type of person who uploads a lot of photos of the same person…

Instead of typing her name 64 times, all you’ll need to do is click “Save”…

They are offering to save time for a certain type of user. It does not by any means justify an opt-out philosophy for automatically tagging everyone else, given the risk and privacy issues.

Google built but never launched a facial recognition service. The company was worried about its potential for abuse, says Google chairman Eric Schmidt.

Facebook’s system also brings to mind the problem of what happens if every face in every picture is the same. In other words, how long before a clever artist builds a flashmob holding up masks with a picture of someone else to get it automatically tagged hundreds or even thousands of times?

This seems like the obvious answer and a great way to protest the opt-out:

Introducing the Mark Zuckerberg Halloween Mask

Now you too can look like the man who says his plan to “become a vegetarian” is killing and eating animals.