I love news stories like the one in the NYT called “A Russian A.T.M. With an Ear for the Truth.”
The machine scans a passport, records fingerprints and takes a three-dimensional scan for facial recognition. And it uses voice-analysis software to help assess whether the person is truthfully answering questions that include “Are you employed?” and “At this moment, do you have any other outstanding loans?”
Only an ear for truth? Now if only they could add eyes to tell whether a person is actually speaking or just playing back a recording. How random are the questions? Would they prevent someone from replaying a recording of a previously captured voice?
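If the question set were fixed, a recording from an earlier session would answer it just as well. Here is a minimal sketch of the standard countermeasure, a randomized challenge: the question pool and the read-aloud nonce are my own illustration, not anything the article describes.

```python
import secrets

rng = secrets.SystemRandom()  # unpredictable, OS-backed randomness

# Hypothetical question pool; the article names only two questions.
QUESTION_POOL = [
    "Are you employed?",
    "At this moment, do you have any other outstanding loans?",
    "What is today's date?",
    "In what city were you born?",
]

def new_session_challenge(k: int = 2) -> list[str]:
    """Draw a fresh, unpredictable challenge for each session.

    A recording of a past session cannot answer questions it never
    heard, and it cannot read back a number that was generated after
    the recording was made.
    """
    questions = rng.sample(QUESTION_POOL, k)
    nonce = rng.randrange(100_000, 1_000_000)
    questions.append(f"Please read this number aloud: {nonce}")
    return questions

if __name__ == "__main__":
    for q in new_session_challenge():
        print(q)
```

Even this only raises the bar against static replay; it does nothing against a live accomplice, which is the case raised below.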
Sberbank says that to comply with the part of the privacy law that would prohibit a company from keeping a database of customers’ voice signatures, the bank plans to store customers’ voice prints on chips contained in their credit cards.
Stored how, and for how long, and how do you update it?
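For what it's worth, here is a hypothetical sketch of the metadata an on-card voice print would need just to make those questions answerable. The article doesn't describe Sberbank's actual scheme; the field names and retention period here are my own assumptions.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class VoiceprintRecord:
    """Hypothetical on-card record; field names are mine, not Sberbank's."""
    template: bytes          # opaque enrollment feature vector
    algorithm_version: int   # which matcher produced the template
    enrolled_on: date
    expires_on: date         # voices drift with age and health; age it out

    def needs_reenrollment(self, current_version: int, today: date) -> bool:
        # An update path has to exist: either the matcher has changed,
        # or the stored voice is too stale to trust.
        return (self.algorithm_version < current_version
                or today >= self.expires_on)

record = VoiceprintRecord(
    template=b"\x00" * 512,   # placeholder bytes, not a real template
    algorithm_version=3,
    enrolled_on=date(2011, 6, 1),
    expires_on=date(2011, 6, 1) + timedelta(days=730),
)
print(record.needs_reenrollment(current_version=4, today=date.today()))
```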
And how would this work with someone who is mute?
Another interesting case would be a relative or other accomplice answering the voice tests on behalf of the applicant. Can the system detect a woman’s voice answering for a male applicant, or an old voice for a young applicant…?
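Probably not reliably. The crudest version of such a check is a pitch estimate compared against rule-of-thumb ranges; a sketch, assuming numpy and ranges that are folklore rather than a real classifier:

```python
import numpy as np

def estimate_pitch_hz(frame: np.ndarray, sample_rate: int,
                      fmin: float = 60.0, fmax: float = 400.0) -> float:
    """Crude autocorrelation pitch estimate over one voiced frame."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_lo = int(sample_rate / fmax)   # shortest plausible period
    lag_hi = int(sample_rate / fmin)   # longest plausible period
    lag = lag_lo + int(np.argmax(corr[lag_lo:lag_hi]))
    return sample_rate / lag

# Rule-of-thumb adult ranges; they overlap, so this can only flag
# gross mismatches, never prove anything.
def plausibly_male(f0_hz: float) -> bool:
    return 85.0 <= f0_hz <= 180.0

if __name__ == "__main__":
    sr = 16_000
    t = np.arange(sr) / sr
    voice = np.sin(2 * np.pi * 120.0 * t)   # synthetic 120 Hz "voice"
    f0 = estimate_pitch_hz(voice, sr)
    print(f"estimated F0: {f0:.1f} Hz, plausibly male: {plausibly_male(f0)}")
```

The overlap between typical male and female ranges is exactly why the question is a good one: pitch alone can't settle it.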
Perhaps the most startling aspect of the story is how the company working on the technology does not understand the privacy implications.
“We are not violating a client’s privacy,” [Mr. Orlovsky, the Sberbank executive] said. “We are not climbing into the client’s brain. We aren’t invading their personal lives. We are just trying to find out if they are telling the truth. I don’t see any reason to be alarmed.”
Privacy violations do not require “climbing into the client’s brain,” and they do not require “invading their personal lives.” Those are bogus tests. A privacy violation can be as simple as collecting personal information (e.g., a voice signature) and failing to protect it from unauthorized disclosure.