New Research Suggests Game Theory Has Been Wrong All Along

You know that thing where US Air Force brass like to say that it was mutually assured destruction (MAD) that kept the world safe after WWII?

I already knew such “ultimatum” gaming is generally wrong (pun not intended) from simple history, yet now there is even more evidence it won’t work, based on research in other fields.

Modern, highly-distributed, multi-cultural conflicts have to answer to the Machiguenga results “shaking up psychology and economics”:

…a vast amount of scholarly literature in the social sciences—particularly in economics and psychology—relied on the ultimatum game and similar experiments. At the heart of most of that research was the implicit assumption that the results revealed evolved psychological traits common to all humans, never mind that the test subjects were nearly always from the industrialized West. Henrich realized that if the Machiguenga results stood up, and if similar differences could be measured across other populations, this assumption of universality would have to be challenged.

Spoiler alert: the results stood up. It’s a long read, well worth your time.

If You Like Privacy, Then Love Apple Child Protection Measures

Update August 21: Patrick Walsh wrote a post called “unpopular opinion” that agrees with me, and makes almost exactly the same points:

I believe Apple is doing right here not simply because of what they’re doing but because of how they’re doing it, which I think is a responsible and privacy-preserving approach to an important issue.

He even explains in more graphic detail why PTSD is a real problem in this space, and thus why Apple’s solution is both privacy-preserving and harm-reducing.

Update August 19: Nicholas Weaver explains why “hash collision” news is not news at all and the risks are low to non-existent.

The only real risk, says one security researcher, is that anyone who wanted to mess with Apple could flood the human reviewers with false-positives.

All the commentary alleging that “lives will be ruined” by a hash collision is tone-deaf and unrealistic hype. We know and have ample proof that children’s lives will be ruined without these Child Protection Measures.

However, there is no meaningful technical or logical argument so far demonstrating how any innocent adult’s life will be harmed in the process.

Don’t believe the hype. If you’ve read the over-sensationalized nonsense of Alex Stamos, Julian Assange or Edward Snowden you will recognize them in this explanation by a psychologist:

In our social media lives we too often seek intense reaction rather than positive or helpful reactions, leading us to privilege sensationalized content. We want more engagement, not necessarily more grounded facts, so we do what we need to do to garner attention in digital spaces.

Take just one example: in April of 2018 Alex Stamos claimed to be responsible for security at Facebook, yet the company was “creating and storing facial recognition templates of 200,000 local users without proper consent” on his watch. That kind of moral failure, and such obvious privacy wrongs, should get far more attention.



Let’s get one thing out of the way. Children are being exposed, their images are taken cruelly, and very serious harms are obvious. It’s awful. It’s criminal.

As someone who has personally investigated such crimes (and will never discuss details of where or when), I can say it is nothing you ever want to have to do… yet it is absolutely necessary work to protect victims and stop children’s lives from being destroyed by a lack of privacy.

Technical security controls are required, especially at scale. We know this from similar controls that have been reducing malware and spam for decades.

Right?

And that’s not even to get into the very different fact that investigating malware or spam doesn’t cause PTSD.

Apple has done exactly the right thing with their Child Protection Measures and put some very old and tried controls in place at scale to help prevent serious rights violations, preserving the privacy of children.

I am not saying that as an extremist on either end of the privacy versus knowledge spectrum. The middle path here (almost always denied by privacy and knowledge extremists through the excluded-middle fallacy) is that we clearly need enough knowledge to preserve enough privacy.

Caring about privacy is NOT inconsistent with protecting society (more accurately protecting children) from exposure by investigating violations of their privacy.

Now I hear that many in America claim to be upset at this news because they face “losing” some fake notion of “freedom” they had: the freedom to deny children privacy, to illegally share and distribute known harmful photos of very young strangers.

Let me put this another way.

I know that America is the only United Nations member failing to ratify the international treaty that protects its children. The Convention on the Rights of the Child defends every child’s right to survival, education, nurturing, and protection from violence and abuse.

Thus of course we know why Americans are so up in arms about Apple blocking the violence and abuse of children: America remains the only country in the world that refuses to agree to do exactly that.

It’s a real problem in America because a small group of very powerful white men spend millions to investigate spam and malware when they think it might hurt someone’s wallet, yet very strongly draw the line and refuse to budge when it comes to protecting children from privacy abuse let alone death.

American tech executives in particular are being hypocritical when they literally block protection only when it is narrowly defined to benefit children.

They claim they care about privacy while denying it to children being harmfully exposed. They claim they care about freedom while denying it to children being harmfully controlled.

Hany Farid captures this perfectly in his Newsweek take-down of a cruel and tone-deaf Facebook executive:

Will Cathcart, head of Facebook’s WhatsApp messaging app, had this to say about Apple’s announcement: “I read the information Apple put out yesterday and I’m concerned. I think this is the wrong approach and a setback for people’s privacy all over the world. People have asked if we’ll adopt this system for WhatsApp. The answer is no.” He continued with: “Apple has built software that can scan all the private photos on your phone—even photos you haven’t shared with anyone. That’s not privacy.”

These statements are misinformed, fear-mongering and hypocritical.

While Apple’s technology operates on a user’s device, it only does so for images that will be stored off the device, on Apple’s iCloud storage. Furthermore, while perceptual hashing, and Apple’s implementation in particular, matches images against known CSAM (and nothing else), no one—including Apple—can glean information about non-matching images. This type of technology can only reveal the presence of CSAM, while otherwise respecting user privacy.

Cathcart’s position is also hypocritical—his own WhatsApp performs a similar scanning of text messages to protect users against spam and malware. According to WhatsApp, the company “automatically performs checks to determine if a link is suspicious. To protect your privacy, these checks take place entirely on your device.” Cathcart seems to be comfortable protecting users against malware and spam, but uncomfortable using similar technologies to protect children from sexual abuse.

BOOM!

You couldn’t drop a harder hammer on Facebook management than Farid does, proving their executive culture willfully harmful and the company toxic to society.

Read this again and again:

WhatsApp performs a similar scanning of text messages to protect against spam and malware… but uncomfortable using similar technologies to protect children…

Farid goes on to make many excellent points in his clear-headed centrist article, and from a technical perspective they are all very well stated. He’s right.

However, he also omits the looming cultural factor here, perhaps because it is the biggest elephant to ever stand in the room of privacy.

Who really knows where to begin discussing it properly? I propose starting here: do not report on Apple’s move without mentioning America’s refusal to ratify the rights of the child to protection from abuse.

Rugged individualism, the libertarian nonsense of social Darwinism, is a bogus concept and should be called out for exactly what it is.

…social Darwinists hold beliefs that conflict with the principles of liberal democracy, and their vision of social life is not conducive to fostering a cooperative, egalitarian society.

Even worse is when it creeps into these discussions unchallenged by reporters purporting to be talking about privacy rights.

This is leadership in privacy by Apple and is entirely in step with everything they have been saying and doing.

That’s the proper framing when we hear Americans jump to argue children don’t deserve even basic privacy protection that can save their lives, which Apple is finally bringing to market.

Now I’ll try to explain the technical aspects of the Apple system, which as I said is more a throwback to old methods already used in many places.

When the big “cloud” services scan images, they typically generate a hash (a small digest of the image) and compare it to hashes of already known bad images (a database of known CSAM, or Child Sexual Abuse Material).

Apple is not doing this, instead pulling forward the much older model used in things like antivirus software. It pushes a blocklist to each client. That means hashes of known bad images are given to an Apple client so it can try to do a match locally when images are being pushed by the client to Apple’s iCloud.

Very 1980s? The local scan-and-match model is not new, and Apple is not the first to use it.

If a hash is locally matched against the blocklist (meaning the hash of an image being sent by the client to iCloud matches the hash of a known bad image), Apple writes an encrypted “voucher” into an iCloud log and waits. When a threshold of these vouchers is reached (said to be around 30), it triggers decryption and a report for Apple human reviewers to verify the bad images.
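To make that flow concrete, here is a rough sketch in Python of the local match, voucher and threshold steps just described. Every name in it is a hypothetical placeholder, and the hash is a stand-in: Apple’s actual design uses a perceptual hash (NeuralHash) plus private set intersection and threshold secret sharing, so match results stay unreadable below the threshold.

```python
"""Heavily simplified, hypothetical sketch of the client-side flow described
above. The hashing, "voucher" and threshold logic here are illustrative
placeholders only, not Apple's implementation."""

import hashlib

# Hashes of known bad images, pushed down to the device like an AV signature list.
BLOCKLIST = {"placeholder-hash-of-known-bad-image"}
THRESHOLD = 30  # the voucher threshold is reported to be around 30


def image_hash(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash; a real perceptual hash survives resizing
    and re-encoding, which an exact SHA-256 digest does not."""
    return hashlib.sha256(image_bytes).hexdigest()


def make_voucher(image_id: str, matched: bool) -> dict:
    """Stand-in for an encrypted safety voucher written to the iCloud log;
    real vouchers stay opaque until the threshold is crossed."""
    return {"image_id": image_id, "matched": matched}


def on_icloud_upload(image_id: str, image_bytes: bytes, voucher_log: list) -> None:
    """Runs locally, and only for images the user is uploading to iCloud."""
    matched = image_hash(image_bytes) in BLOCKLIST
    voucher_log.append(make_voucher(image_id, matched))


def review_needed(voucher_log: list) -> bool:
    """Server side: only once the threshold of matches is reached does anything
    get decrypted and queued for human review."""
    return sum(v["matched"] for v in voucher_log) >= THRESHOLD
```

The toy version skips the cryptography entirely, but the design property it stands in for is the point: non-matching images never produce anything readable off the device.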

In other words, cloud companies scan your images in the cloud, which is not great for privacy, and also not great for the people who have to look at bad images. Seriously, PTSD is a big part of this story that privacy extremists never seem to acknowledge, even though you’d think it would fit their interests.

Apple, however, is bringing privacy enhancements to the client by shifting to a local copy of the blocklist, and by scanning only if you upload to their iCloud, which is optional.

Having a local client generate a hash and match it locally, sending an encrypted log entry only when using an optional data store, is NOT a back door by any reasonable definition.

Apple also is going to scan encrypted iMessages when a family account is set up, such that “pornographic” images are blurred when detected, with a safety warning to children that their parents will be notified if they continue.

This system uses image analysis on the device, known as a machine learning classifier. Of course such systems are flawed, as I’ve spoken about in almost every presentation of mine since at least 2012.

Don’t get too worried that Apple will experience failure with machine learning. Everyone does. “AI” is fundamentally broken and that’s the much deeper issue. Their model admits such risks and takes them somewhat into account (e.g. with an opt-in parental monitoring system where children have a dialog to bypass).
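For comparison, here is an equally rough sketch of that family-account iMessage flow. Again, every name in it (looks_explicit, warn_child, notify_parents) is a hypothetical placeholder rather than an Apple API, and the real classifier is an on-device model:

```python
"""Hypothetical sketch of the separate iMessage family-account feature described
above. All names are illustrative placeholders, not Apple APIs."""


def looks_explicit(image_bytes: bytes) -> bool:
    """Stand-in for the on-device machine learning classifier."""
    return False  # a real model scores the image locally; nothing leaves the phone


def warn_child(image_id: str) -> bool:
    """Blur the image and show a safety warning; return True if the child
    uses the dialog to bypass the warning and view it anyway."""
    return False


def notify_parents(image_id: str) -> None:
    """Fires only on an opt-in family account, and only if the child proceeds."""
    print(f"parents notified about {image_id}")


def handle_incoming_image(image_id: str, image_bytes: bytes,
                          family_account: bool) -> str:
    if not family_account or not looks_explicit(image_bytes):
        return "show"              # nothing detected, nothing blurred or reported
    if warn_child(image_id):       # child chooses to bypass the warning
        notify_parents(image_id)
        return "show"
    return "blurred"               # child heeds the warning; image stays blurred
```

A classifier error lands squarely in looks_explicit, which is why the opt-in design and the bypass dialog matter: a false positive blurs a harmless image and warns a child, it does not file a report.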

On that depressing note the latest research has in fact started to define trustworthy machine learning.

Again, NOT a back door by any reasonable definition.

Why would Apple work to protect the privacy of children? Why are they giving people more privacy by scanning on a client instead of in the cloud? Perhaps the answer is obvious?

It’s better privacy.

As a final thought, people often raise the slippery slope fallacy on this topic. It goes something like “if we allow Apple to deploy technology that protects children from harm, what’s to stop the government next taking over our entire lives?”

Besides being a totally bogus fallacy that intentionally ignores all the middle steps, it also raises the question of why children are where the line is being drawn. Where were all these advocates when client-side scanning by anti-virus was deployed, and what are they saying about it today?

Anyone with insider information on anti-virus knows the messy connections into governments well, yet where’s the outrage and evidence of tyranny from overreach as a result?

We’ve seen almost nothing of the kind (and NO I am not going to tell you where there has been overreach because the fact that you don’t know just proves my point).

Ridicule as a Weapon: a Fate for Nazis Worse Than Death

A bayonet shoves Hitler’s book in front of a prisoner and says “Here, improve your mind!”. Source: “Donald in Nutziland”, Disney 1943.

In 2006 the applied studies program of The Institute of World Politics (IWP) released a special international communication draft called “Ridicule as a Weapon” (White Paper No. 7). It contained sharp analysis such as this:

…U.S. strategy includes undermining the political and psychological strengths of adversaries and enemies by employing ridicule as a standard operating tool of national strategy. Ridicule is an under-appreciated weapon not only against terrorists, but against weapons proliferators, despots, and international undesirables in general. Ridicule serves several purposes:
• Ridicule raises morale at home.
• Ridicule strips the enemy/adversary of his mystique and prestige.
• Ridicule erodes the enemy’s claim to justice.
• Ridicule eliminates the enemy’s image of invincibility.
• Directed properly at an enemy, ridicule can be a fate worse than death.

More precisely, it offers this applied context:

The Nazis and fascists required either adulation or fear; their leaders and their causes were vulnerable to well-aimed ridicule. […] Like many in Hollywood did at the time, the cartoon studios put their talent at the disposal of the war effort. Disney’s Donald Duck, in the 1942 short “Donald Duck In Nutziland” (retitled “Der Fuehrer’s Face”), won an Academy Award after the unhappy duck dreamed he was stuck in Nazi Germany.

And then it concludes with this suggestion:

U.S. policymakers must incorporate ridicule into their strategic thinking. Ridicule is a tool that they can use without trying to control. It exists naturally in its native environments in ways beneficial to the interests of the nation and cause of freedom. Its practitioners are natural allies, even if we do not always appreciate what they say or how they say it. The United States need do little more than give them publicity and play on its official and semi-official global radio, TV and Internet media, and help them become “discovered.” And it should be relentless about it.

And for what it’s worth John Lenczowski, a National Security Council staffer under President Ronald Reagan, founded the IWP.

A modern and somewhat nuanced take on what this all means today is captured in a new talk by General Glen VanHerck, head of US Northern Command:

“Rather than primarily focusing on kinetic defeat, for the defense of the homeland, I think we must get further left,” VanHerck told an audience at the Space and Missile Defense Symposium. “Deterrence is establishing competition by using all levers of influence as I conveyed, and most importantly, the proper use of the information space to demonstrate the will, the capability, the resiliency, and the readiness by creating doubt in any potential adversaries mind that they can ever be successful by striking our homeland.”

Putting “doubt in any potential adversaries mind that they can ever be successful”… is to ridicule them, as Rommel found out the hard way when he quickly lost all his potential to be an adversary.

Racism at Tesla Might Explain Why Their “Autopilot” Crashes So Often

If you already know an infamous company is run by a white man from South Africa who grew up wealthy during apartheid (hint: his claims of $100K in debt at an early age are an admission of his privilege and leverage, also known as credit)… then you might not be surprised to hear that racism is a well-documented and ongoing problem:

An ex-Tesla employee who worked at the Fremont factory for about two years said in a sworn declaration in the Vaughn case that he had heard the “N-word” used at least 100 times by co-workers and that Black and White employees alike referred to the factory as “the plantation” or “slaveship.”

This sentence in particular calls out Tesla’s denials as lacking credibility:

Arbitrations aren’t usually made public, but Bloomberg News reports that court filings reveal that Rushing found Berry’s allegations more credible than Tesla’s denials.

The courts ruled not only that Tesla has a racism problem, but that management has repeatedly failed to act on it to prevent harm/disaster. There seems to be a learning problem.

In a related story, the people who believed Tesla’s “autopilot” lies are shocked, so shocked, to find out that they were sold a pack of lies: Tesla cars fail to “learn”, such that multiple owners have been crashing into the same hazards over and over again.

An avid Tesla customer, he has previously owned a 2016 Model S P90D and a 2016 Model X. What’s also concerning is that he noted that “several calls and emails” to Tesla about the latest crash in Yosemite haven’t been returned with much interest from the automaker. He hoped that the repair costs would be covered and a loaner car supplied. With five Teslas crashing in the same spot, there is clearly an issue with the use of the driver-assist tech on this particular road and one can only hope Tesla takes it seriously before another accident occurs.

Five cars crashed into the same exact spot. And Tesla still wants you to believe that its cars are “learning”? What if they aren’t learning at all, but the CEO is learning just how far people will follow his advice to their own demise?

In my presentations since at least 2016 I have repeatedly tried to raise awareness about this class of failure. Simply put, if your Tesla is designed to calculate 2+2=4 a billion times, that doesn’t mean it can do basic geometry right, let alone subtraction.

Don’t be fooled by Tesla’s false marketing. Lots of miles, even lots of cars, doesn’t mean it’s prepared for anything.

And why would an “avid Tesla customer” think repair costs for his crashed car should be covered? That definitely smells like someone has been drinking the clown Kool-Aid.

Does this owner not understand how the Tesla scam works?

Getting Tesla to care about its customers would require some kind of government intervention to protect against… obvious fraud and abuse.

So back to the first point, what if poor engineering quality with disdain for human safety is mixed with uncontrolled racism?

You get a Tesla.

Racism is literally repeating mistakes from the past, an inability to learn.

The car’s “driverless” system, just like the car manufacturer’s management, exhibits this trait of repeatedly failing to get past the same basic collision and harm.

See the similarities?

Being “blind to race” is like being blind while driving.

So is owning a Tesla any different from wearing a blood diamond or an endangered ivory bracelet — being blind to the harms? More to the point what if Tesla is just the reformulation of blood diamond thinking out of South Africa?

A heavy survivor-bias means their corrupted definition of “learning” manifests cruelly in sending children to their death in mines (now naive drivers into traffic and dangerous roads) so that a few elites who never take risk themselves can say they own the nicest things.

Zero accountability.

Zero humanity.

Unfortunately many people seem unable to see the obvious symptoms of failure here, as they instead cook up strange ways to issue apologies while promoting harm.

Take this quote, for example, from the same article above that concluded bizarrely “one can only hope Tesla takes it seriously before another accident occurs”.

Sometimes the technology does its job admirably, like when a drunk driver passed out behind the wheel and his Tesla successfully brought the car to a stop, averting almost certain disaster.

Why should we expect Tesla to take any risk seriously when they have built a profit model on overstating capability and then ignoring predictable disasters?

It reminds me of people saying how lucky blacks in America were to be slaves because emancipation meant being dropped into unfair competition with racist whites who keep all the power, deny blacks healthcare and education, and refuse to pay fair wages.

It is NOT an admirable job if the technology encourages drivers to drink and then fraudulently shift blame to their car. There’s nothing to say disaster would be averted (Tesla data in fact shows more accidents happen in cars that add their “driverless” feature).

It could also have crashed, so perhaps the real story is that this is a case where Tesla didn’t fail as badly as should be expected. Never forget that basic “lane assist” is not really driverless.

When this driver was pulled out of his car by police, he tried to argue that since the CEO of Tesla says the car drives itself, a drunk Tesla driver can’t be charged for actually driving.

Does that sound like Tesla is taking safety seriously to prevent another accident? And perhaps more interestingly, does it sound like whites saying they can’t be blamed for apartheid?