Lessons of Afghanistan: If you doubt Palantir, you’re probably right.

The buried lede in the story about Palantir’s role in Afghanistan is this passage:

I knew his face. I doubted the computer. I was right.

If you doubt Palantir, you’re probably right.

In other words, the American company shamelessly built an overpriced and unaccountable “justice” system that tries to paint the world in an overly simplistic good/evil dichotomy.

How was the farmer on the tractor misidentified as the cell leader in the purple hat in the first place? After the air strike was called off, and the man was spared execution, the PGSS operators rolled back the videotape to review what had happened and see what they could learn.

“It was his hat,” Kevin explains. “There’s a window of time, around dawn, as the sun comes up,” he says, when colors are “read differently” by the imaging system than during the day. In that window, the farmer’s hat was misidentified as purple, setting off a chain of linkages built on information that was erroneous to begin with.
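
To see how easily this class of error happens, here is a minimal sketch (my own illustration in Python, assuming a naive hue-threshold classifier; nothing here is Palantir’s actual code): the same hat pixel reads as drab brown at midday and lands squarely in the “purple” band once dawn light suppresses the red and green channels.

```python
import colorsys

# Hypothetical hue-threshold classifier, purely for illustration.
def classify_hat(r, g, b):
    h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    hue_deg = h * 360
    return "purple" if 260 <= hue_deg <= 320 else "other"

# Midday reading of the hat: a desaturated brown.
print(classify_hat(120, 100, 90))               # -> "other"

# Dawn light suppresses red/green relative to blue; same hat, new hue.
def dawn_shift(r, g, b):
    return int(r * 0.55), int(g * 0.45), int(b * 0.95)

print(classify_hat(*dawn_shift(120, 100, 90)))  # -> "purple"
```

Any rigid threshold on raw sensor color will fail this way unless the system normalizes for illumination, which is exactly the kind of detail a closed system prevents anyone from auditing.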

But what if the S2 shop had killed the farmer in the purple hat in error? And what if, out of fear of backlash over yet another civilian casualty, the data showing the mistake was deleted so it would never become known? This invites the question: who has control over Palantir’s Save or Delete buttons?

“Not me,” says Kevin. “That’s an S2 function.”

Kafka warned everyone about this kind of thinking in his dystopian novel “The Trial.”

A computer mistaking the color of a hat because of a lighting change, inside a secretive proprietary system… that is an obvious recipe for expensive garbage, as if it were 1915 all over again.

If WWI seems forever ago and you prefer a 1968 reference, the failures in Afghanistan show Palantir to be a god-awful product (pun intended, since the company claims to offer “god mode”), much like the IGLOO WHITE disaster of the Vietnam War.

The problem with knowing history is you’re condemned to watch people repeat the worst mistakes.

This story about Palantir reminds me of another one from long ago:

In the early hours of September 26, 1983, the Soviet Union’s early-warning system detected the launch of intercontinental ballistic missiles from the United States. Stanislav Petrov, a forty-four-year-old duty officer, saw the warning. […] He reasoned that the probability of an attack on any given night was low—comparable, perhaps, to the probability of an equipment malfunction. Simultaneously, in judging the quality of the alert, he noticed that it was in some ways unconvincing. (Only five missiles had been detected—surely a first strike would be all-out?) He decided not to report the alert, and saved the world.
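
Petrov’s judgment is plain base-rate reasoning. A quick Bayes calculation (with made-up numbers, purely for illustration) shows why a single alarm from a fallible sensor should not outweigh a tiny prior:

```python
# Illustrative numbers only; nothing here comes from declassified sources.
p_attack = 1e-6          # prior: chance of a real first strike on a given night
p_alert_attack = 0.99    # chance the system alerts during a real attack
p_alert_false = 1e-4     # chance of a false alert from equipment malfunction

# Bayes' rule: P(attack | alert)
posterior = (p_alert_attack * p_attack) / (
    p_alert_attack * p_attack + p_alert_false * (1 - p_attack)
)
print(f"{posterior:.2%}")  # roughly 1% -- the alarm barely moves the needle
```

The five-missiles detail cut the posterior further, because a real first strike would look nothing like the alert he was shown.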

So does ignoring Palantir mean saving the world, or at least one “starfish”?

Maybe.

I’ve written and presented many times about these fancy, expensive tools spitting out critical errors; who really knows how many people have been killed unjustly because nobody questioned the machine.

In 2016 I gave a talk showing how a system billed as “90% accurate” could be broken 100% of the time with simple color shifts, just like the color shift described above that broke Palantir.
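
Here is a sketch of that style of attack (my reconstruction for illustration, not the actual 2016 demo code): sweep simple hue rotations over an input until the classifier changes its answer, something a fixed color threshold can never survive.

```python
import colorsys

def rotate_hue(rgb, degrees):
    """Apply a simple hue rotation to an RGB pixel."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255 for c in rgb))
    h = (h + degrees / 360) % 1.0
    return tuple(int(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))

def classify(rgb):
    """Stand-in for any hue-threshold classifier."""
    hue = colorsys.rgb_to_hsv(*(c / 255 for c in rgb))[0] * 360
    return "purple" if 260 <= hue <= 320 else "other"

pixel = (120, 100, 90)
baseline = classify(pixel)
for deg in range(0, 360, 5):  # systematic color shifts
    if classify(rotate_hue(pixel, deg)) != baseline:
        print(f"label flipped at a {deg} degree hue shift")
        break
```

Accuracy measured on clean inputs says nothing about behavior on shifted inputs, which is how a “90% accurate” marketing claim can coexist with a 100% failure rate under attack.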

Since then I’ve repeated the demonstration many times… and what concerns me is that Palantir is completely closed and proprietary, so independent experts can’t demonstrate that it’s expensive junk (making life-and-death decisions no better, or even worse, than the alternatives) designed to put excessive power into the hands of a few men.


Update December 2022: the US Army is politely calling Palantir’s lock-in technology stack a pile of garbage (“unpopular”).

At the foundation of [our popular] strategy is standards and things that we can provide out to industry that enable their black boxes to plug in. And so it gets rid of a lot of the — ultimately all of the — vendor lock issues that we may have in parts of the architecture today.

German City Bans “legally highly problematic” Zoom

Data protection experts say that, despite high-profile promises from Zoom management to stop doing all the wrong things, the company still violates the GDPR through its handling of personal data.

More specifically, Ulrich Kühn, acting Hamburg Commissioner for Data Protection and Freedom of Information, published this sharp analysis (translated from German):

Public entities are particularly bound to comply with the law. It is more than regrettable such a formal step has to be taken. In the FHH all employees have access to a proven video conferencing tool that is unproblematic with regard to third-country transfers.

Dataport, as the central service provider, also provides additional video conferencing systems in its own data centers. These are used successfully in other states such as Schleswig-Holstein.

It is therefore incomprehensible why the Senate Chancellery insists on an additional and legally highly problematic system.

“Incomprehensible” why people choose Zoom? I suppose that’s like trying to comprehend why people would resort to violence over Cabbage Patch dolls.

His point boils down to some simple facts and basic reasoning: why bother breaking the law to use Zoom when far better, legally compliant (safer) options exist?

It probably has something to do with Chinese military intelligence… sorry, I meant Zoom knowing that the market has a predictable tendency toward herd thinking and low-effort cognition instead of factual reasoning.

Enshittification of Tech: Ajar Tesla “Falcon” Door Hits London Bus

A video making the rounds on social media asks a simple question: how can a Tesla driver ignore big red warning lights and “proceed with caution” text on the dashboard?

Perhaps the better question is why the engineers fail to auto-close the door as the wheels start moving (obviously with obstruction sensors to prevent crushing things, which I know is a problem for Tesla given its reputation for horrible obstruction sensing).

Or why don’t engineers prevent the car from moving when a wing door is open (let alone when a truck is crossing in front of it at a red light — see what I mean about obstruction sensing)?

Here’s the video in question:

Is there any real use case for driving with these doors open? The simplest and most elegant fix is that the car can’t move while a wing door is ajar, as the sketch below illustrates.
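
Here is a minimal sketch of that interlock (hypothetical names and logic, nothing from actual Tesla firmware): the car refuses to move until each door independently reports a latched state.

```python
from dataclasses import dataclass

@dataclass
class DoorState:
    name: str
    latched: bool  # sensor-confirmed, never inferred from the other door

def drive_allowed(doors: list[DoorState]) -> bool:
    """Refuse motion while any door is ajar."""
    open_doors = [d.name for d in doors if not d.latched]
    if open_doors:
        print("HOLD: doors ajar ->", ", ".join(open_doors))
        return False
    return True

# One latched door must never imply both are closed.
doors = [DoorState("falcon_left", True), DoorState("falcon_right", False)]
print(drive_allowed(doors))  # -> False; the car should stay put
```

Checking each latch sensor separately also addresses the failure mode described below, where a driver hears one door close and assumes both did.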

I have yet to see anyone make the connection, for example, to this other recent video on Instagram of a Tesla driver quite deliberately keeping the doors ajar while driving.

Perhaps in both cases the cars are malfunctioning and unable to close the door?

Here’s a 2019 video of exactly this problem, foreshadowing the news today.

Oops, here’s a 2018 video of the same thing happening:

This is a different 2018 video of the same thing happening, right?

That, of course, came two years after a 2016 example of exactly this problem:

In many of these cases it does seem like something is malfunctioning and the driver falsely believes that closing one door means both are closed, which completely undermines the warning systems.

Does anyone have a count? It seems underreported.

And I don’t mean this is a Tesla-only problem, just that (for a brand pretending to be “high” end) they’re spectacularly worse than most at shipping garbage to customers and ignoring the problems it causes.

Here’s more context on what has been happening across the market for high-tech gadgets rushed to release without proper testing or quality control:

Just this week, for instance, the two Alexa-connected blinds in my bedroom failed to roll down at their scheduled time, and this morning, only one of them opened back up again. I have no idea what went wrong, because Alexa doesn’t offer any feedback when things fail, so all I could do was try again until the routine triggered properly. Those kinds of misfires are common in the smart home world. I’ve had Google Assistant refuse to set alarms or read upcoming calendar events for several days in a row, only to fix itself without explanation. My Ecobee thermostat occasionally gets stuck on a single temperature, requiring a reboot. I’ve had light bulbs inexplicably fail to connect to their hub device. And I’m pretty confident that every Echo speaker owner has experienced Alexa playing the wrong music at least once.

These examples go on and on, yet most of them involve marginal, disposable-income gadgets. Tesla is in a category where it’s supposed to provide an essential service, and where failure can seriously injure or kill people.