COVID-19 vaccines rated “best overall pharmaceuticals on the market in any class”

A very good summary of the COVID-19 vaccine has this paragraph buried in its seventh section, under “preventing disease and death”:

The vaccine shows an 8-fold reduction in the development of any symptomatic disease secondary to delta. For hospitalization, it is a 25-fold reduction. That’s 25 times! Remarkable. For death, it is also 25 times! This is a very effective pharmaceutical class when looking at overall efficacy toward the intended/expected purpose. When looking at the very tiny side effect profile, I’d personally consider it one of the best overall pharmaceuticals on the market in any class of drugs.
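For readers who want those fold reductions as the more familiar efficacy percentages, here is a quick back-of-the-envelope conversion (my own arithmetic, not the quoted author’s): a k-fold reduction in risk is a risk ratio of 1/k.

```python
# Convert "k-fold reduction in risk" into the efficacy percentages
# usually quoted for vaccines. My arithmetic, not the quoted author's.

def efficacy_from_fold_reduction(k: float) -> float:
    """A k-fold risk reduction is a risk ratio of 1/k, i.e. efficacy 1 - 1/k."""
    return 1.0 - 1.0 / k

print(f"8-fold (symptomatic disease): {efficacy_from_fold_reduction(8):.1%}")        # 87.5%
print(f"25-fold (hospitalization or death): {efficacy_from_fold_reduction(25):.1%}") # 96.0%
```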

When people ask what the long-term effects of the vaccine are, I get the feeling this guy would tell them “you’re not dead and the people you know and love are not dead”.

If someone in America wanted to market this more successfully, they would have to call it something like the Rambo patriotic personal health defense system.

That cartoonist almost gets it right, since it really should say something like “it’s for protecting against the spread of evil”.

Speaking of graphic illustration, here’s August 2021 data from all the reporting states in America (fewer than half; Florida and Texas obviously not reporting). The left side shows fully vaccinated:

New Research Suggests Game Theory Has Been Wrong All Along

You know that thing where US Air Force brass like to say that it was mutually assured destruction (MAD) that kept the world safe after WWII?

I already know such “ultimatum” gaming is generally wrong (pun not intended) from simple history, yet now there is even more evidence it doesn’t work, based on research in other fields.

Modern, highly-distributed, multi-cultural conflicts have to answer to the Machiguenga results “shaking up psychology and economics”:

…a vast amount of scholarly literature in the social sciences—particularly in economics and psychology—relied on the ultimatum game and similar experiments. At the heart of most of that research was the implicit assumption that the results revealed evolved psychological traits common to all humans, never mind that the test subjects were nearly always from the industrialized West. Henrich realized that if the Machiguenga results stood up, and if similar differences could be measured across other populations, this assumption of universality would have to be challenged.

Spoiler alert: the results stood up. It’s a long read, well worth your time.
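For anyone unfamiliar with the ultimatum game at the heart of that research, here is a minimal sketch of its payoff rules (my own toy illustration, not code from the study):

```python
# Toy model of the ultimatum game. A proposer offers a split of a pot;
# the responder either accepts (both get paid) or rejects (both get
# nothing). Classical theory predicts responders accept any nonzero
# offer; the cross-cultural results above show real behavior varies.

def ultimatum_game(pot: int, offer: int, responder_accepts: bool) -> tuple[int, int]:
    """Return (proposer_payoff, responder_payoff) for one round."""
    assert 0 <= offer <= pot
    if responder_accepts:
        return pot - offer, offer
    return 0, 0  # rejection punishes both players

# The "rational" responder takes any positive offer...
print(ultimatum_game(100, 1, responder_accepts=True))   # (99, 1)
# ...yet many real players reject unfair offers at a cost to themselves.
print(ultimatum_game(100, 1, responder_accepts=False))  # (0, 0)
```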

If You Like Privacy, Then Love Apple Child Protection Measures

Update August 21: Patrick Walsh wrote a post called “unpopular opinion” that agrees with me, and makes almost exactly the same points:

I believe Apple is doing right here not simply because of what they’re doing but because of how they’re doing it, which I think is a responsible and privacy-preserving approach to an important issue.

He even explains in more graphic detail why PTSD is a real problem in this space, and thus why Apple’s solution is both privacy-preserving and harm-reducing.

Update August 19: Nicholas Weaver explains why “hash collision” news is not news at all and the risks are low to non-existent.

The only real risk, says one security researcher, is that anyone who wanted to mess with Apple could flood the human reviewers with false-positives.

All the commentary alleging that “lives will be ruined” by a hash collision is tone-deaf and unrealistic hype. We know, and have ample proof, that children’s lives will be ruined without these Child Protection Measures.

However, there is so far no meaningful technical or logical demonstration of how any innocent adult’s life would be harmed in the process.

Don’t believe the hype. If you’ve read the over-sensationalized nonsense of Alex Stamos, Julian Assange or Edward Snowden you will recognize them in this explanation by a psychologist:

In our social media lives we too often seek intense reaction rather than positive or helpful reactions, leading us to privilege sensationalized content. We want more engagement, not necessarily more grounded facts, so we do what we need to do to garner attention in digital spaces.

Take as just one example how in April of 2018 Alex Stamos claimed to be responsible for security at Facebook, yet on his watch it was “creating and storing facial recognition templates of 200,000 local users without proper consent”. That kind of moral failure, doing obvious privacy wrongs, should get far more attention.

Let’s get one thing out of the way. Children are being exposed, their images taken cruelly, and very serious harms are obvious. It’s awful. It’s criminal.

As someone who has personally investigated such crimes (and will never discuss details of where or when), I can tell you it is nothing you ever want to have to do… yet it is absolutely necessary work to protect victims and stop children’s lives from being destroyed by a lack of privacy.

Technical security controls are required, especially at scale. We know this from similar controls that have been reducing malware and spam for decades.

Right?

And that’s not even to get into the very different fact that investigating malware or spam doesn’t cause PTSD.

Apple has done exactly the right thing with their Child Protection Measures, putting some very old and tried controls in place at scale to help prevent serious rights violations and preserve the privacy of children.

I am not saying that as an extremist on either end of the privacy-versus-knowledge spectrum. The middle path here (almost always denied, via the excluded middle fallacy, by privacy and knowledge extremists) is that we clearly need enough knowledge to preserve enough privacy.

Caring about privacy is NOT inconsistent with protecting society (more accurately protecting children) from exposure by investigating violations of their privacy.

Now I hear in the news that many in America are claiming to be upset because they face “losing” some fake notion of “freedom” they had: the freedom to deny children privacy, and to illegally share and distribute known harmful photos of very young strangers.

Let me put this another way.

I know that America is the only United Nations member failing to ratify the international treaty that protects children. The Convention on the Rights of the Child defends every child’s right to survival, education, nurturing, and protection from violence and abuse.

Thus of course we know why Americans are so up in arms about Apple blocking violence against and abuse of children: America remains the only country in the world refusing to agree to do exactly that.

It’s a real problem in America because a small group of very powerful white men spend millions to investigate spam and malware when they think it might hurt someone’s wallet, yet very firmly draw the line and refuse to budge when it comes to protecting children from privacy abuse, let alone death.

American tech executives in particular are being hypocritical when they block protection only when it is narrowly defined to benefit children.

They claim they care about privacy while denying it to children being harmfully exposed. They claim they care about freedom while denying it to children being harmfully controlled.

Hany Farid captures this perfectly in his Newsweek take-down of a cruel and tone-deaf Facebook executive:

Will Cathcart, head of Facebook’s WhatsApp messaging app, had this to say about Apple’s announcement: “I read the information Apple put out yesterday and I’m concerned. I think this is the wrong approach and a setback for people’s privacy all over the world. People have asked if we’ll adopt this system for WhatsApp. The answer is no.” He continued with: “Apple has built software that can scan all the private photos on your phone—even photos you haven’t shared with anyone. That’s not privacy.”

These statements are misinformed, fear-mongering and hypocritical.

While Apple’s technology operates on a user’s device, it only does so for images that will be stored off the device, on Apple’s iCloud storage. Furthermore, while perceptual hashing, and Apple’s implementation in particular, matches images against known CSAM (and nothing else), no one—including Apple—can glean information about non-matching images. This type of technology can only reveal the presence of CSAM, while otherwise respecting user privacy.

Cathcart’s position is also hypocritical—his own WhatsApp performs a similar scanning of text messages to protect users against spam and malware. According to WhatsApp, the company “automatically performs checks to determine if a link is suspicious. To protect your privacy, these checks take place entirely on your device.” Cathcart seems to be comfortable protecting users against malware and spam, but uncomfortable using similar technologies to protect children from sexual abuse.

BOOM!

You couldn’t drop a harder hammer on Facebook management than Farid does, proving their executive culture willfully harmful and their company toxic to society.

Read this again and again:

WhatsApp performs a similar scanning of text messages to protect against spam and malware… but uncomfortable using similar technologies to protect children…

Farid goes on to make many excellent points in his clear-headed centrist article, and from a technical perspective they are all very well stated. He’s right.

However, he also omits the looming cultural factor here, perhaps because it is the biggest elephant to ever stand in the room of privacy.

Who really knows where to begin discussing it properly? I propose starting here: do not report on Apple’s move without mentioning America’s refusal to ratify the rights of the child to protection from abuse.

Rugged individualism, the libertarian nonsense of social Darwinism, is a bogus concept and should be called out for exactly what it is.

…social Darwinists hold beliefs that conflict with the principles of liberal democracy, and their vision of social life is not conducive to fostering a cooperative, egalitarian society.

Even worse is when it creeps into these discussions unchallenged by reporters purporting to be talking about privacy rights.

This is leadership in privacy by Apple, entirely in step with everything they have been saying and doing.

That’s the proper framing when we hear Americans jump to argue children don’t deserve even the basic privacy protection that can save their lives, which Apple is finally bringing to market.

Now I’ll try to explain the technical aspects of the Apple system, which as I said is more of a throwback to old methods already used in many places.

When the big “cloud” services scan images, they typically generate a hash (a small digest of the image) and compare it to hashes of already-known bad images, using what’s known as a CSAM (child sexual abuse material) database.

Apple is not doing this, instead pulling forward the much older model used in things like antivirus software: it pushes a blocklist to each client. That means hashes of known bad images are given to an Apple client so it can try to do a match locally when images are being uploaded by the client to Apple’s iCloud.
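Here is a minimal sketch of that local-match model, with two loud simplifications: I use an ordinary SHA-256 digest where Apple uses a perceptual hash (NeuralHash), and a plain set lookup where Apple uses a private set intersection protocol. Treat it as the antivirus-style shape of the idea, not Apple’s implementation.

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist pushed down to the client, in the spirit of an
# antivirus signature update. The digest below is a placeholder; a real
# system would ship perceptual hashes rather than SHA-256 values.
BLOCKLIST: set[str] = {
    "placeholder-digest-of-a-known-bad-image",
}

def hash_image(path: Path) -> str:
    """Digest an image file. A perceptual hash would survive resizing and
    re-encoding; a cryptographic hash like this one would not."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def matches_blocklist(path: Path) -> bool:
    """Local check, run only when the client is uploading to cloud storage."""
    return hash_image(path) in BLOCKLIST
```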

Very 1980s? The local scan-and-match model is not new, and Apple’s is not the first.

If a hash is locally matched with the blocklist (meaning a hash for a bad image matches the hash of an image being sent by the client to iCloud), Apple writes an encrypted “voucher” into an iCloud log with a wait state. When a threshold of these vouchers is reached (said to be around 30), it triggers decryption and a report for Apple’s human reviewers to verify the bad images.
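The trigger logic, roughly, looks like the counter below. One loud simplification: in Apple’s published design the vouchers use threshold secret sharing, so they cannot be decrypted at all until the threshold is crossed. My sketch only models when the escalation fires, and the class and names are mine, not Apple’s API.

```python
# Sketch of the voucher threshold trigger. THRESHOLD = 30 is the figure
# reported in the press; everything else here is a hypothetical stand-in.
THRESHOLD = 30

class VoucherLog:
    """Per-account log of encrypted safety vouchers."""

    def __init__(self) -> None:
        self.vouchers: list[bytes] = []  # opaque below the threshold

    def add_voucher(self, encrypted_voucher: bytes) -> None:
        self.vouchers.append(encrypted_voucher)
        if len(self.vouchers) >= THRESHOLD:
            self.escalate()

    def escalate(self) -> None:
        # In the real design, only at this point does decryption become
        # possible, and human reviewers then verify the matched images.
        print(f"{len(self.vouchers)} matches: flag account for human review")
```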

In other words, cloud companies scan your images in the cloud, which is not great for privacy, and also not great for the people who have to look at bad images. Seriously, PTSD is a big part of this story that privacy extremists never seem to acknowledge, even though you’d think it would fit their interests.

Apple, however, is bringing privacy enhancements to the client by shifting to a local copy of the blocklist, and by scanning only if you upload to their iCloud, which is optional.

Having a local client generate a hash and match it locally, sending an encrypted log entry only when using an optional data store, is NOT a back door by any reasonable definition.

Apple also is going to scan encrypted iMessages when a family account is set up, such that “pornographic” images are blurred when detected, with a safety warning to children that their parents will be notified if they continue.

This system uses on-device image analysis, known as a machine learning classifier. Of course such systems are flawed, as I’ve said in almost every presentation of mine since at least 2012.
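Conceptually, the message flow is something like the sketch below. Every name in it is hypothetical (Apple has not published this as an API), and the classifier stub stands in for an on-device ML model that will inevitably misfire sometimes, which is exactly why the bypass dialog matters.

```python
# Conceptual flow of the opt-in family-account iMessage feature, per the
# published description. All names here are mine, not Apple's.

def classifier_flags_image(image: bytes) -> bool:
    """Stand-in for an on-device ML classifier. Real models are
    probabilistic and produce false positives and negatives; this
    stub just never flags anything."""
    return False

def handle_incoming_image(image: bytes, is_child_account: bool) -> str:
    """Return the action taken for one incoming image."""
    if is_child_account and classifier_flags_image(image):
        # The image is blurred and the child is warned; choosing to view
        # it anyway notifies the parents on the family account.
        return "blur + warn; viewing anyway notifies parents"
    return "deliver normally"
```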

Don’t get too worried that Apple will experience failure with machine learning. Everyone does. “AI” is fundamentally broken, and that’s the much deeper issue. Their model admits and takes such risks somewhat into account (e.g. with an opt-in parental monitoring system where children have a dialog to bypass).

On that depressing note, the latest research has in fact started to define trustworthy machine learning.

Again, NOT a back door by any reasonable definition.

Why would Apple work to protect the privacy of children? Why are they giving people more privacy by scanning on a client instead of in the cloud? Perhaps the answer is obvious?

It’s better privacy.

As a final thought, people often raise the slippery-slope fallacy on this topic. It goes something like “if we allow Apple to deploy technology that protects children from harm, what’s to stop the government next taking over our entire lives?”

Besides being a totally bogus fallacy argument that intentionally ignores all the middle steps, it raises the question of why children are the line being drawn here. Where were all these advocates when client-side scanning by anti-virus was deployed, and what are they saying about it today?

Anyone with insider information on anti-virus knows the messy connections into governments well, yet where’s the outrage and evidence of tyranny from overreach as a result?

We’ve seen almost nothing of the kind (and NO I am not going to tell you where there has been overreach because the fact that you don’t know just proves my point).

Ridicule as a Weapon: a Fate for Nazis Worse Than Death

A bayonet shoves Hitler’s book in front of a prisoner and says “Here, improve your mind!” Source: “Donald in Nutziland”, Disney 1943.

In 2006 a special international communication draft was released by the applied studies program of The Institute of World Politics (IWP), called “Ridicule as a Weapon”, White Paper No. 7. It contained sharp analysis such as this:

…U.S. strategy includes undermining the political and psychological strengths of adversaries and enemies by employing ridicule as a standard operating tool of national strategy. Ridicule is an under-appreciated weapon not only against terrorists, but against weapons proliferators, despots, and international undesirables in general. Ridicule serves several purposes:
• Ridicule raises morale at home.
• Ridicule strips the enemy/adversary of his mystique and prestige.
• Ridicule erodes the enemy’s claim to justice.
• Ridicule eliminates the enemy’s image of invincibility.
• Directed properly at an enemy, ridicule can be a fate worse than death.

More precisely, it offers this applied context:

The Nazis and fascists required either adulation or fear; their leaders and their causes were vulnerable to well-aimed ridicule. […] Like many in Hollywood did at the time, the cartoon studios put their talent at the disposal of the war effort. Disney’s Donald Duck, in the 1942 short “Donald Duck In Nutziland” (retitled “Der Fuehrer’s Face”), won an Academy Award after the unhappy duck dreamed he was stuck in Nazi Germany.

And then it concludes with this suggestion:

U.S. policymakers must incorporate ridicule into their strategic thinking. Ridicule is a tool that they can use without trying to control. It exists naturally in its native environments in ways beneficial to the interests of the nation and cause of freedom. Its practitioners are natural allies, even if we do not always appreciate what they say or how they say it. The United States need do little more than give them publicity and play on its official and semi-official global radio, TV and Internet media, and help them become “discovered.” And it should be relentless about it.

And for what it’s worth, John Lenczowski, a National Security Council staffer under President Ronald Reagan, founded the IWP.

A modern and somewhat nuanced take on what this all means today is captured in a new talk by General Glen VanHerck, head of US Northern Command:

“Rather than primarily focusing on kinetic defeat, for the defense of the homeland, I think we must get further left,” VanHerck told an audience at the Space and Missile Defense Symposium. “Deterrence is establishing competition by using all levers of influence as I conveyed, and most importantly, the proper use of the information space to demonstrate the will, the capability, the resiliency, and the readiness by creating doubt in any potential adversaries mind that they can ever be successful by striking our homeland.”

Putting “doubt in any potential adversaries mind that they can ever be successful”… is to ridicule them, as Rommel found out the hard way when he quickly lost all his potential to be an adversary.