Injustice Robots: Real and Present Danger of Police Overconfidence in AI

This New Yorker story about unjustified overconfidence in AI (expensive and flashy policing technology toys) reminds me of the trouble with police radar.

…technology has only grown more ubiquitous, not least because selling it is a lucrative business, and A.I. companies have successfully persuaded law-enforcement agencies to become customers. […] Last fall, a man named Randal Quran Reid was arrested for two acts of credit-card fraud in Louisiana that he did not commit. The warrant didn’t mention that a facial-recognition search had made him a suspect. Reid discovered this fact only after his lawyer heard an officer refer to him as a “positive match” for the thief. Reid was in jail for six days and his family spent thousands of dollars in legal fees before learning about the misidentification, which had resulted from a search done by a police department under contract with Clearview AI. So much for being “100% accurate.”

You think that’s bad?

Imagine how many people in America since the 1960s have been tangled into fines, jail, or even death due to inaccurate and unreliable “velocity” sensors or “plate recognition” used for racially profiled law enforcement. The police know about these technology flaws, and so do judges, yet far too often they treat their heavy investments in poorly measured and irregularly operated technology as infallible.

They also have some clever court rules to protect their players in the game. For example, try walking into a court and saying this:

maximum acceleration = velocity accuracy / sample time

a_max = maximum acceleration
±v_acc = velocity accuracy
t_i = sample time

a_max = ±v_acc / t_i

A speed sensor typically measures the velocity of an object traveling a set distance (between “gates” within range of the sensor). Only targets within these parameters get a fair detection or reading.

…accelerations must not be neglected in the along-track velocity estimation step if accurate estimates are required.

If a radar sensor with ±1.0 mph velocity accuracy samples once every second, any velocity change greater than 1.0 mph per second exceeds what it can accurately read. A half-second sample time raises that limit to 2.0 mph per second, a quarter-second sample to 4.0 mph per second, and so forth.
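To make that arithmetic concrete, here is a minimal sketch of the limit calculation, assuming the ±1.0 mph velocity-accuracy figure used above (an assumption for illustration; check the actual device spec sheet):

```python
# Minimal sketch: maximum acceleration a radar can resolve before a target's
# velocity change within one sample exceeds the device's stated accuracy.
# Assumes a +/-1.0 mph velocity accuracy, as in the examples above.

def max_measurable_acceleration(velocity_accuracy_mph: float,
                                sample_time_s: float) -> float:
    """a_max = v_acc / t_i, returned in mph per second."""
    return velocity_accuracy_mph / sample_time_s

if __name__ == "__main__":
    v_acc = 1.0  # assumed velocity accuracy spec, in +/- mph
    for t_i in (1.0, 0.5, 0.25):  # sample times in seconds
        a_max = max_measurable_acceleration(v_acc, t_i)
        print(f"sample time {t_i:.2f} s -> acceleration limit {a_max:.1f} mph/s")
```

A vehicle braking or accelerating harder than that limit between samples, which is hardly rare in real traffic, is outside what the reading can honestly claim.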

What?

In other words, you step up to the judge and tell them their beloved, expensive police toy is unable to measure vehicle velocity when it changes faster than the radar device’s known, calculable limit, a problem especially pronounced around common road curves and at vehicle angles (e.g. the “cosine effect” popularized in school math exams).
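For the cosine effect specifically, the geometry is simple enough to show in a few lines; the speed and angle below are purely illustrative, not taken from any cited case:

```python
import math

# Cosine effect: a radar measures only the component of a vehicle's velocity
# along the radar's line of sight, so the reading is true_speed * cos(angle).

def radar_reading_mph(true_speed_mph: float, angle_off_axis_deg: float) -> float:
    return true_speed_mph * math.cos(math.radians(angle_off_axis_deg))

if __name__ == "__main__":
    # Illustrative only: a 70 mph vehicle measured 20 degrees off the radar axis
    print(f"{radar_reading_mph(70.0, 20.0):.1f} mph")  # ~65.8 mph
```

The error grows as the angle grows, and on a curve the angle itself keeps changing between samples.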

Any trustworthy court assessment would take a look at radar specs and acceleration risk to the sensor …to which the judge might spit their chew into a bucket and say “listen here Mr. smarty-math-pants big-city slicker from out-of-town, you didn’t register with our very nice and welcoming court here as an expert, therefore you are very rude and nothing you say can be heard here! Our machine says you are GUILTY!” as they throw out any or all evidence that proves technology can be wrong.

Source: “Traffic Monitoring with SAR: Implications of Target Acceleration”, Microwaves and Radar Institute, DLR, Germany

Not saying this actual court exchange really happened in rural America, or that I gave a 2014 BlackHat talk about this happening (to warn that big data systems are highly vulnerable to breaches of integrity), but… anyway, have you seen Blazing Saddles?

“Nobody move or the N* gets it!”

It’s like saying guns don’t kill people, AI with guns kill people.

AI is just technology and it makes everything worse if we allow it to escape the fundamental social sciences of where and how people apply technology.

Fast forward (pun not intended) and my warnings from 2014 big data security talks have implications for things like “falsification methods to reveal safety flaws in adaptive cruise control (ACC) systems of automated vehicles”.

…we present two novel falsification methods to reveal safety flaws in adaptive cruise control (ACC) systems of automated vehicles. Our methods use rapidly-exploring random trees to generate motions for a leading vehicle such that the ACC under test causes a rear-end collision.
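The paper’s method builds rapidly-exploring random trees; as a much cruder illustration of the same falsification idea, here is a hypothetical sketch that randomly samples lead-vehicle braking profiles against a naive constant-time-gap ACC follower and flags any profile ending in a rear-end collision (every model and parameter here is invented for illustration, none of it is from the paper):

```python
import random

# Toy falsification loop: sample lead-vehicle deceleration profiles, simulate a
# naive ACC follower, and flag any profile that closes the gap to zero (a
# rear-end collision). Illustrative only; the cited paper uses rapidly-exploring
# random trees rather than naive random sampling.

DT = 0.1                    # simulation step, seconds
TIME_GAP = 1.5              # desired time gap for the toy ACC, seconds
ACC_GAIN = 0.5              # proportional gain of the toy ACC controller
ACCEL_LIMITS = (-6.0, 2.0)  # follower acceleration limits, m/s^2

def simulate(brake_profile, v0=30.0, gap0=45.0, horizon=10.0):
    """Return the minimum gap (meters) reached while the lead car brakes."""
    lead_v = follow_v = v0
    gap = min_gap = gap0
    t = 0.0
    while t < horizon:
        lead_v = max(0.0, lead_v + brake_profile(t) * DT)
        # toy ACC: push the gap toward TIME_GAP seconds at the current speed
        error = gap - TIME_GAP * follow_v
        accel = max(ACCEL_LIMITS[0], min(ACCEL_LIMITS[1], ACC_GAIN * error))
        follow_v = max(0.0, follow_v + accel * DT)
        gap += (lead_v - follow_v) * DT
        min_gap = min(min_gap, gap)
        t += DT
    return min_gap

def random_brake_profile():
    """Lead car brakes hard at a random time, for a random duration."""
    start = random.uniform(0.0, 5.0)
    length = random.uniform(0.5, 4.0)
    decel = random.uniform(-9.0, -3.0)
    return lambda t: decel if start <= t < start + length else 0.0

if __name__ == "__main__":
    random.seed(0)
    for trial in range(200):
        if simulate(random_brake_profile()) <= 0.0:
            print(f"trial {trial}: rear-end collision found")
            break
    else:
        print("no collision found in 200 random trials")
```

The point is not the toy controller; it is that a search over inputs can surface crash scenarios the vendor’s happy-path testing never tries.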

Over the past decade, falsification in AI safety has literally become a life-and-death crisis, with some (arguably racist) robots already killing over 40 people.

Dead. Killed by AI.

Cars don’t kill people, AI in cars kill people. In fact, since applying AI to cars in a rush to put robots in charge of life-or-death decisions, Tesla has killed more people in a few short years than all other robots in history combined.

That’s a fact, as we recently published in The Atlantic. Predictable disaster, I say, because I warned about exactly this result for the past ten years (e.g. 2016 Ground Truth Keynote presentation at BSidesLV). Perhaps all these deaths are evidence of what courts now refer to as product “harm by design” due to a documented racist and antisemite.

Look at how the NHTSA frames the safety of radar sensors for police use in its Conforming Product List (CPL) for Speed-Measuring Devices, to maintain trust in the technology:

…performance specifications ensure the devices are accurate and reliable when properly operated and maintained…

Specifications. Proper operation.

Show me the comparable setup from NIST for a conforming list of AI image reading devices used by police, not to mention definitions of proper operation.

Let’s face it (pun not intended), any AI solution based on sensor data of any kind, including cameras, should have come under the same scrutiny as other reviews (human or machine) of sensor data, to avoid repeating all the inexcusable rookie mistakes and injustices of overconfident, technology-laden police over several prior decades.

And on that note, the police themselves should expect to be severely harmed by AI when it is carelessly operated.

Cluster of testicular cancer in police officers exposed to hand-held radar

Where are all the social scientists when you need them?

“No warning came with my radar gun telling me that this type of radiation has been shown to cause all types of health problems including cancer,” [police Officer] Malcolm said. “If I had been an informed user I could have helped protect myself. I am not a scientist but a victim of a lack of communication and regulation.” […] “We’re putting a lot of people at risk unnecessarily,” [Senator] Dodd said. “The work of police officers is already dangerous, and officers should not have to worry about the safety of the equipment they use.”

Which reminds me of the police officers who have been suing gun manufacturers over a lack of safety. You’d think, given the track record of high-risk technology in law enforcement, no police department in its right mind would apply any AI to its work without clear and tested safety regulations. If you find any police department foolishly buying the notoriously deadly AI of Tesla, for example, they are headed directly into a tragic world of injustice.

Judge finds ‘reasonable evidence’ Tesla knew self-driving tech was defective

Elon Musk Says Some of His Best Friends Are Jews

Well, does anyone really have any doubts now about Elon Musk being antisemitic?

First, read this 1936 book on why nobody should ever say some of their friends are Jews in response to accusations of antisemitism.

Robert Gessner (1907-1968) was a Jewish American screenwriter and author born in Michigan. He earned a BA from the University of Michigan in 1929 and an MA from Columbia University in 1930, after which New York University immediately hired him to teach. He traveled to several European countries in 1934, taking photographs and filming. Gessner in 1936 published a book about these journeys, in which he explicitly warned of the Nazi threat in Europe.

That phrase, it’s a dire warning. It’s a well known phrase of antisemitism associated with Nazi Germany.

Second, read this 2014 book for the update.

A book about how some things apparently haven’t changed.

Still, to this day, we see a well known (and researched), unmistakable phrase of antisemitism.

Third, note the phrase chosen by the man increasingly becoming known for… his antisemitism.

“I’m aware of that old sort of trope of like, you know, ‘I have a Jewish friend,’” Musk said. “I don’t have a Jewish friend. I think probably, I have twice as many Jewish friends as non-Jewish friends. That’s why I think I like to think I’m Jewish basically.”

A twist.

He says he can avoid the trope, then plows straight into it, implying some of his best friends are Jews by hinting at the “numbers”. Then he clumsily erases his friends’ Jewish identities by claiming he is “basically” them, as if unclear (perhaps revealing his deeper thought: “my best friends are me”).

This is evidence of the lazy and arrogant antisemite who doesn’t even try to avoid the most glaringly obvious mistakes of history.

Here’s how one youthful antisemite of the 1930s explained the common hypocrisy in their entire family’s devotion to Nazism.

For as long as we could remember, the adults had lived in this contradictory way with complete unconcern. One was friendly with individual Jews whom one liked, just as one was friendly as a Protestant with individual Catholics. But while it occurred to nobody to be ideologically hostile to the Catholics, one was, utterly, to the Jews. In all this no one seemed to worry about the fact that they had no clear idea of who “the Jews” were.

Further reading:

Related: How racist is Elon Musk? The federal government is suing Elon Musk because of extensively documented racism.

The More Driverless Cars Deployed The Less People Want Them

Since 2012 I have warned, here and in talks, that the biggest problem in big data security was integrity. The LLM zealots didn’t listen.

By 2016 in the security conference circuit I was delivering a series of talks about driverless cars being a huge looming threat to pedestrian safety.

Eventually, with Uber and Tesla both killing pedestrians in April 2018, I warned that the move to such low quality robots on the road would increase conflict and fatalities even more instead of helping safety.

Well, as you might guess from my failure to slow LLM breaches, I had little to no impact on the people in charge of regulating driverless engineering; nowhere near enough influence to stop predictable disasters.

It’s especially frustrating to now read that the NHTSA, which was politically corrupted by Tesla in 2016 to ignore robot safety and hide deaths, is still so poorly prepared to prevent driverless cars from causing pedestrian deaths.

Feds Have No Idea How Many Times Cruise Driverless Cars Hit Pedestrians

Speaking of data, confidence in driverless cars has continued to fall as the evidence rolls in, which is a classic product management dilemma of where and how to field safety reports. Welcome to 2012? We could have avoided so much suffering and loss.

Great Disasters of Machine Learning: Predicting Titanic Events in Our Oceans of Math

Federal Judge Rules First Amendment Doesn’t Protect “Harm by Design”

A court case in America today about online security stems from a decision in 1980 under Ronald Reagan to knowingly expose children to harmful products, which he reaffirmed in 1988 with bogus framing about the Constitution.

…the Constitution simply does not empower the Federal Government to oversee the programming decisions of broadcasters in the manner prescribed by this bill. […] It would inhibit broadcasters from offering innovative programs that do not fit neatly into regulatory categories and discourage the creation of programs that might not satisfy the tastes of agency officials responsible for considering license renewals. […] The bill’s limitation on advertising revenue for certain types of programming places the Federal Government in the inappropriate position of favoring certain kinds of programming over others.

If it sounds crazy, that’s because it was CRAZY.

By the late 1970s, concern about advertising to kids had grown so strong that a Federal Trade Commission taskforce took on the question about whether to ban or regulate this onslaught of marketing. Sixty thousand pages of expert testimony and 6,000 pages of oral testimony from leading experts on health, child psychology, and nutrition followed. The conclusions were clear: kids can’t distinguish between programs and commercials. As the report published at the time put it: “very young children are cognitively unable to understand the selling intent of ads.” Experts argued that these findings provided strong legal ground for special protections for children. […] Everything changed in 1980. As one of his first moves of his presidency, Reagan appointed a new FTC chairman, one more interested in pleasing business than parents. Within a year, the proposals were killed. What’s worse, Congress passed the Federal Trade Commission Improvement Act, which, Westen says, “mandated that the FTC would no longer have any authority whatsoever to regulate advertising and marketing to children, leaving markets virtually free to target kids as they saw fit.”

Ronald Reagan, one of the most corrupt and hate-mongering racist Presidents in American history (a very tough goal to achieve), used double-speak to say the government protecting children from harms would be inappropriate because it was “favoring certain kinds of programming”.

Yeah, people who cared about children were favoring safe programming. Safety for children comes from special protections, which is a very useful and common role for government that should surprise exactly nobody. It’s not only appropriate governance, it’s exactly what limitations on advertising should have ALWAYS been about.

Meanwhile, the political director of Ronald Reagan’s campaign described the heart of their platform as favoring certain kinds of people…

…all these things you’re talking about are totally economic things and a byproduct of them is, blacks get hurt worse than whites.

The predictable trajectory of these angry, racist white men gutting societal safety laws, effectively unleashing ruthless corporations to prey on the vulnerable, has landed hardest on families. Grieving parents and friends of the alarming number of injured and dead children now question why American companies were set loose to peddle products harmful by design.

Last week, the families in the case received a powerful boost when a federal judge ruled that the companies could not use the First Amendment of the US constitution, which protects freedom of speech, to block the action.

The judge ruled that, for example, a lack of “robust” age verification and poor parental controls, as the families argue, are not issues of freedom of expression.

Lawyers for the families called it a “significant victory”.

Companies delivering and curating content with an intentional lack of safety from harm are deemed not to be protected by claims of a constitutional freedom of expression.

…the story of Molly Russell, from north-west London, who took her own life after being exposed to a stream of negative, depressing content on Instagram. An inquest into her death found she died “while suffering from depression and the negative effects of online content”.

This ruling strikes directly at the heart of the cold and cruel binary/oppositional calculus of Reaganism, where his “shining hill” dog-whistle of tyranny has always meant Blacks are meant to get hurt worse than whites, women are meant to get hurt worse than men, Muslims are meant to get hurt worse than Christians, children…

It reminds me of another judge in 2019 who said Americans could be protected from online domestic terror groups, leaning on the idea that their hate speech is physical harassment.

And for that matter, I’m sure many of you are reading the news that “intentional incitement of the Jan. 6 marauders overcame any free-speech claim”.