The buried lede in the story about Palantir’s role in Afghanistan is this passage:
I knew his face. I doubted the computer. I was right.
If you doubt Palantir, you’re probably right.
In other words, the American company shamelessly built an overpriced and unaccountable “justice” system that tries to paint the world in an overly simplistic good/evil dichotomy.
How was the farmer on the tractor misrecognized as the cell leader in the purple hat in the first place? After the air strike was called off, and the man was spared execution, the PGSS operators rolled back the videotape to review what had happened. To see what they could learn.
“It was his hat,” Kevin explains. “There’s a window of time, around dawn, as the sun comes up,” he explains, where colors are “read differently” by the imaging system than how it sees them during the day. In this window of time, the farmer’s hat was misidentified as purple, setting off a series of linkages that were based on information that was erroneous to begin with.
But what if the S2 shop had killed the farmer in the purple hat in error? And what if, out of fear of backlash over yet another civilian casualty, the data that showed otherwise was deleted so that it would never become known? This invites the question: Who has control over Palantir’s Save or Delete buttons?
“Not me,” says Kevin. “That’s an S2 function.”
Kafka warned everyone about this kind of thinking in his dystopian novel “The Trial.”
A computer mistaking the color of a hat because the lighting changed, inside a secretive proprietary system, is an obvious recipe for expensive garbage, as if it were 1915 all over again.
If WWI seems like forever ago and you prefer a 1968 reference, the failures in Afghanistan show Palantir to be a god-awful failure (pun intended, since the company claims to offer “god mode”), much like the IGLOO WHITE disaster of the Vietnam War.
The problem with knowing history is you’re condemned to watch people repeat the worst mistakes.
This story about Palantir reminds me of another one from long ago:
In the early hours of September 26, 1983, the Soviet Union’s early-warning system detected the launch of intercontinental ballistic missiles from the United States. Stanislav Petrov, a forty-four-year-old duty officer, saw the warning. […] He reasoned that the probability of an attack on any given night was low—comparable, perhaps, to the probability of an equipment malfunction. Simultaneously, in judging the quality of the alert, he noticed that it was in some ways unconvincing. (Only five missiles had been detected—surely a first strike would be all-out?) He decided not to report the alert, and saved the world.
So does ignoring Palantir mean saving the world, or at least one “starfish”?
Maybe.
I’ve written and presented many times about these fancy and expensive tools spitting out critical errors; who really knows how many people have been killed unjustly because nobody questioned the machine.
In 2016 I gave a talk showing how a system billed as “90% accurate” could be broken 100% of the time with simple color shifts, much like the color misread described above that broke Palantir.
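For readers who want the mechanics, here is a minimal sketch of that kind of test, under stated assumptions: classify() stands in for any black-box image classifier (it is not Palantir’s interface, which nobody outside can inspect), and the channel gains are arbitrary values that mimic a dawn-like color cast.

```python
# Minimal sketch of a color-shift robustness test.
# Assumptions (not from the original talk or from Palantir): classify() is any
# black-box image classifier returning a label string, and the channel gains
# are arbitrary stand-ins for the color cast that low sun angles produce.
import numpy as np
from PIL import Image

def shift_colors(img: Image.Image, red_gain: float = 1.15, blue_gain: float = 0.85) -> Image.Image:
    """Rescale the red and blue channels to mimic a change in lighting color."""
    arr = np.asarray(img.convert("RGB")).astype(np.float32)
    arr[..., 0] *= red_gain    # warm up the reds
    arr[..., 2] *= blue_gain   # pull down the blues
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

def label_flips(classify, img: Image.Image) -> bool:
    """True when nothing in the scene changed except color balance,
    yet the classifier changed its answer."""
    return classify(img) != classify(shift_colors(img))
```

If the label flips under a shift a human observer would barely register, the headline accuracy number is meaningless at exactly the moment it matters.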
Since then I’ve repeated the demonstration again and again… and what concerns me is that Palantir is completely closed and proprietary, so independent experts can’t demonstrate that it’s a bunch of expensive junk (no better, and perhaps worse, at life-and-death decisions) designed to put excessive power into the hands of a few men.
Update December 2022: the US Army is politely calling Palantir’s lock-in technology stack a pile of garbage (“unpopular”).
At the foundation of [our popular] strategy is standards and things that we can provide out to industry that enable their black boxes to plug in. And so it gets rid of a lot of the — ultimately all of the — vendor lock issues that we may have in parts of the architecture today.