The MIT Technology Review has published a very naive, feel-good story about AI that can “let a single voice through” — one that never mentions the famous, award-winning spy thrillers of the 1960s and 1970s, let alone prior hyper-focused directional microphones.
MIT instead should have written more about how our identity can be seen as a collection of attributes, such that a high-fidelity AI sensor can ingest a complex signal and then maintain a “private” connection between two points better than static, manually managed credentials. That is arguably a positive development, since it suggests pre-shared keys could give way to instant yet more sophisticated credentialed communications.
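To make that contrast concrete, here is a minimal sketch of my own (nothing from the MIT article): a static pre-shared credential is checked once and then trusted, while a signal-based credential is re-measured continuously. The cosine-similarity comparison and the 0.8 threshold are hypothetical stand-ins for whatever voice model a real product would use.

```python
import hmac
import numpy as np

def static_credential_check(presented: bytes, stored: bytes) -> bool:
    # Classic pre-shared secret: verified once, then trusted blindly.
    return hmac.compare_digest(presented, stored)

def signal_credential_check(frame_embedding: np.ndarray,
                            enrolled_embedding: np.ndarray,
                            threshold: float = 0.8) -> bool:
    # "Living" credential: every audio frame is scored against an
    # enrolled voice embedding, so identity is re-proven continuously.
    score = float(np.dot(frame_embedding, enrolled_embedding) /
                  (np.linalg.norm(frame_embedding) *
                   np.linalg.norm(enrolled_embedding)))
    return score >= threshold
```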
I mean, the upside to presenting a secret for ID is that it’s disconnected from our true selves. And the downside to presenting a secret for ID is that it’s disconnected from our true selves. Which is preferred, and when? It depends, so technology and tools are better when they provide options for our complicated world.
Or let me ask this a different way. Do you want to hear a specific person in this crowd? Is it a person with A, B and C attributes?
Either you set up a secret-sharing system to manage that connection or… you can imagine using AI with a microphone to train on a particular Chinese woman’s voice and then alert you to her presence in any crowd while translating her speech to English.
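A hedged sketch of what that pipeline might look like. The embed() and translate_to_english() functions are hypothetical stubs standing in for a real speaker-embedding model and speech-translation model, and the 0.8 threshold is an arbitrary assumption, not anything the article specifies.

```python
from typing import Iterable, Iterator
import numpy as np

def embed(frame: np.ndarray) -> np.ndarray:
    # Placeholder: a real system would call a speaker-embedding model.
    raise NotImplementedError

def translate_to_english(frame: np.ndarray) -> str:
    # Placeholder: a real system would call a speech-translation model.
    raise NotImplementedError

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def monitor_crowd(frames: Iterable[np.ndarray],
                  target_voice: np.ndarray,
                  threshold: float = 0.8) -> Iterator[str]:
    # Scan a live microphone stream; when a frame matches the enrolled
    # voiceprint, alert the listener with an English translation.
    for frame in frames:
        if cosine(embed(frame), target_voice) >= threshold:
            yield translate_to_english(frame)
```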
What if a Sheriff could detect someone who is young, someone who is also Black, someone who is also male… and classify their speech as “threatening” to unleash extra-judicial assassination?
Who gets authorized for such massive-scale, micro-listening accuracy, and for what purposes? And what could be more spooky?
It’s reminiscent of the infamous IGLOO WHITE project (sensors meant to detect men walking in the jungle and relay their position to close air support), which cost over a billion dollars a year in 1968 yet still didn’t achieve mission success. Are we there yet?
In other words, imagine a radio in a combat zone keeping contact with a single voice because of its attributes, where authentication is based on that voice being measured for its particular uniqueness and entropy.
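One way to read “measured for its particular uniqueness and entropy” is an entropy score over the voice signal itself. This is my own rough illustration using Shannon entropy of the power spectrum, not any fielded authentication scheme, and the 6-bit threshold is assumed.

```python
import numpy as np

def spectral_entropy(frame: np.ndarray) -> float:
    # Shannon entropy of the normalized power spectrum: a crude proxy
    # for how much distinguishing information one audio frame carries.
    psd = np.abs(np.fft.rfft(frame)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]  # avoid log(0)
    return float(-(p * np.log2(p)).sum())

def voice_is_distinct(frame: np.ndarray, min_bits: float = 6.0) -> bool:
    # A voice channel might refuse "authentication" when a frame carries
    # too little entropy to be reliably unique (threshold is assumed).
    return spectral_entropy(frame) >= min_bits
```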
And now imagine the countermeasures. One obvious safety device would be to require a secret to “key in” on a particular voice. Like the patented “PIN login” method I invented in 2006 for tens of millions of Internet device users at that time, commercial versions of this AI product could be prevented from detecting a voice unless they also had a pre-shared secret authorizing such a “connection”. No key, no key in.
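In code, that countermeasure is just an authorization gate in front of voice enrollment. A minimal sketch, assuming a SHA-256 stored secret for brevity (a real design would use a salted KDF, and these names are illustrative, not the patented method itself):

```python
import hashlib
import hmac

def key_in(pin: str, stored_pin_hash: bytes, voiceprint: bytes) -> bytes:
    # No key, no key in: the device refuses to enroll (and thus track)
    # a voice unless the pre-shared secret is presented first.
    digest = hashlib.sha256(pin.encode()).digest()
    if not hmac.compare_digest(digest, stored_pin_hash):
        raise PermissionError("no key, no key in")
    return voiceprint  # only now may the device lock onto this voice
```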
The MIT article thus seems very lacking, because it blindly reports on AI product development (really academic research) within a very old technology space. It lacks the proper context of real-world markets (and dangers), as if ethics were somehow a separate consideration from the main one.
“They could help wearers focus on specific voices in noisy environments, such as a friend in a crowd or a tour guide amid the urban hubbub.”
Yeah, sure MIT, “such as a friend” or a tour guide. Positive thinking. Not such as… a target for harm.
The MIT author even uses the word “target” yet fails to mention any of the history and philosophy of targeted identity management and private point-to-point communication risks, let alone why military intelligence is desperate to defeat obfuscation in order to pinpoint targets for spying or even… targeted assassination by drone.
Have you read the 2020 book “First Platoon: A Story of Modern War in the Age of Identity Dominance” by journalist Annie Jacobsen?
https://books.google.com/books/about/First_Platoon.html?id=YLaPEAAAQBAJ&source=kp_book_description
It discusses, in part, the integration of biometric tracking technologies into targeting decisions in Afghanistan per US military field manual… and references an important law review article on targeting called “Biometric Cyberintelligence and the Posse Comitatus Act”.
https://scholarship.law.wm.edu/cgi/viewcontent.cgi?article=3143