Deepfakes are Literally Security Theater

Source: Mashable “The bizarre world of Queen Elizabeth impersonators… LONDON — It’s tough making a living as a Queen Elizabeth impersonator. Not only do you have to master the dress, the wave and the pursed lips, but you also get thrown into endless ridiculous scenarios.”

Have you been to a theater lately? Probably not, because of the pandemic, but if you remember when we all used to go (including to movie theaters, of course), we would watch performance art and… like it (assuming it was well done and believable, of course).

However, I sure see a lot of people getting very upset about something they call Deepfakes.

Source: The Sun, which you definitely should trust.

Why is there such a disconnect between all the people paying money and spending time to be entertained by the performing arts (the act of information deception) and the people decrying that our future will be ruined by Deepfakes (the act of information deception)?

I call this the chasm of information security, and I have been sounding the alarm on it here and in my presentations around the world since at least 2012. It is the foundation of my new book, which I started writing at that time and which has expanded greatly from just a warning call into tangible solutions.

We are long past the time when security professionals should have been talking about the dangers and controls of integrity risks. It is evidence of failure that people can, on one hand, be entertained by information deception without any worry and, on the other hand, decry it as a dangerous future if we allow it to continue.

Is the court jester the end of the kingdom? Obviously not. Is the satirist or political comedian the end of the future? Obviously not.

When an actor changes their voice, is it more or less concerning than when they change their appearance to look like the person they are attempting to represent accurately?

Watching a Deepfake, for me, is like going to the theater or watching a movie, and I fear it very little, perhaps because I intensely study all the ways we can protect ourselves against willful harm.

Integrity is a problem, a HUGE problem. Yet let me ask instead: why are people so worried that performance art, let alone all art, is being artistic?

A headline like this one is no more concerning to me than usual:

A college kid’s fake, AI-generated blog fooled tens of thousands. This is how he made it.

“It was super easy actually,” he says, “which was the scary part.”

Yes, a college kid’s fake blog is called Wikipedia. Lots of people with free time on their hands generate fake content there and fool millions. This should not surprise anyone. Using technology to generate the content makes it faster and easier, sure, but it’s not far from the original problem.
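To see how low the bar really is, here is a minimal sketch, assuming only the publicly available Hugging Face transformers library and the small gpt2 checkpoint (not GPT-3, which sits behind an API; the prompt and model choice here are mine, purely for illustration):

```python
# Minimal sketch of machine-generated "blog" content, assuming the
# Hugging Face `transformers` library and the public gpt2 checkpoint.
# The model and prompt are illustrative, not from the actual story.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Productivity advice nobody wants to hear:"
results = generator(prompt, max_length=120,
                    num_return_sequences=3, do_sample=True)

for i, result in enumerate(results, 1):
    print(f"--- fake post draft {i} ---")
    print(result["generated_text"])
```

The output rambles compared to GPT-3, but the point stands: the marginal cost of plausible-looking filler is near zero, and it was already low when humans did the copying and pasting.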

The bigger problem is that people don’t describe GPT-3 often enough as a fire-spewing dumpster fire that was created without any sense of fire suppression. It’s a disaster.

Philosophers know this. They write academic papers about the kind of obvious classes of vulnerabilities that engineers should have been modeling from day one, if not earlier. Here’s a good example of the kind of thing every security team needs to stick in their quiver:

Source: Silvia Milano, Mariarosaria Taddeo & Luciano Floridi, “Recommender systems and their ethical challenges”, AI and Society 35(4): 957–967 (2020)

When I was in Japan trying to solve for information system risks, I couldn’t raise insider attacks using the old and usual talking points, because everyone there told me dryly that no such thing existed.

Their culture was explained to me as deeply ingrained trust and honor systems, such that they confidently believed they could detect any deviations (and that was hard to argue with, given how they marched into the room, sat by rank and respect from the middle to the end of the table, and only spoke when allowed).

So instead I watched a history documentary about how Osaka castles had been destroyed by invaders, and at the next meeting I brought up the dangers of fakes and imposters: deceptive identities inside their organization.

This hit a big nerve.

Suddenly everyone was waving money at me, saying take it and help them protect against such imminent dangers. Why was a Deepfake so motivating?

It is a massive failing of the security industry that people worry about data integrity and feel afraid, as if they have no tangible answers, yet surround themselves with art all day, every day, and “like” it.

We may in fact have the answers to this failing, and right in front of us.

Again, that’s the chasm of information security today. I hope to explain in great detail, in my upcoming book, what needs to be done about this fear of theater.

Kahneman Himself Clarifies Thinking Systems 1 and 2

Sometimes I am asked to review or explain a framework of thinking systems in terms of a very popular book by Nobel laureate and father of behavioral economics, Daniel Kahneman.

…human reason left to its own devices is apt to engage in a number of fallacies and systematic errors, so if we want to make better decisions in our personal lives and as a society, we ought to be aware of these biases and seek workarounds.

I suppose this comes up most when I describe the same things in many of my presentations, such as my last one given at RSAC SF:

RSAC SF 2020 Presentation on AI

My point usually has been that veterinary science used this duality in thinking to solve Rinderpest (as I wrote here in 2010, a year before Kahneman’s very famous best-selling book was published).

And my point in describing the dual-system thinking that solved Rinderpest, such a huge accomplishment as ending a disease, has been that our security community could perhaps do similar things to solve for integrity attacks on information systems. You say we have a problem with disinformation campaigns, and I’ll say I have a possible solution!
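To make the analogy concrete, here is a rough sketch of my own (the heuristics are made up, and none of this comes from any presentation or product) of what a dual-system integrity control could look like: a fast, automatic screen over everything, escalating only flagged items to slow, deliberate verification.

```python
# Rough sketch of a dual-system integrity control (my own analogy, with
# made-up heuristics): a fast automatic "System 1" pass over every
# message, escalating only flagged items to a slow, deliberate
# "System 2" verification step.

SUSPICIOUS_MARKERS = {"urgent", "wire transfer", "verify your account"}

def system1_screen(message: str) -> bool:
    """Fast and automatic: this happens to every message, cheaply."""
    text = message.lower()
    return any(marker in text for marker in SUSPICIOUS_MARKERS)

def system2_verify(message: str) -> bool:
    """Slow and deliberate: a stub for human review, source checking,
    or provenance analysis, done only when System 1 raises a flag."""
    print(f"Escalated for deliberate review: {message!r}")
    return False  # placeholder verdict

def triage(messages: list[str]) -> None:
    for msg in messages:
        if system1_screen(msg):   # automatic, parallelizable
            system2_verify(msg)   # serialized, expensive

triage([
    "Lunch at noon?",
    "URGENT: wire transfer needed, verify your account now",
])
```

The design choice mirrors the split below: the cheap pass happens to everything, while the expensive pass is something you deliberately do.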

Kahneman himself just gave a brief three-minute presentation the other day in an “AI Debate”. He quickly starts off by admitting

…they’re not my idea but I wrote a book to describe them.

He then goes on to say his understanding of system one is that “things happen to you, you don’t do them”, calling it automatic and parallelized, whereas system two is “something you do” and serialized… all of which seems very consistent with my slides.

Again for clarity:
System 1) things that happen to you
System 2) things that you do

This is not only consistent with what I studied before his book was published; the split is of course NOT my idea either, as I’ve always said.

I have been writing a book to describe them, but it has been for the purposes of improving safety in engineering practices.

What is most interesting in his presentation is that while he tells us system one is “our world”, it’s probably more accurate to say (by his own admission) that in system one we are seeing shadows on Plato’s cave wall, not the strings we pull.

The Death of Double-Agent George Blake

Few remember how America’s 31 May 1951 OPERATION STRANGLE in the Korean War…

…dropped 600K tons of bombs on the DPRK and 2 million civilians perished. It had the reverse of the expected effect, and cauterized resistance.

However, one person who definitely remembered was George Blake, double-agent for the Soviet Union, whose story is one of the best known yet least often connected to such “cauterized resistance”.

Blake emphasized to the press…

…that he decided to switch sides after seeing civilians massacred by the “American military machine.” “I realized back then that such conflicts are deadly dangerous for the entire humankind and made the most important decision in my life – to cooperate with Soviet intelligence voluntarily and for free to help protect peace in the world”.

Here’s another version of events:

…despatched to Seoul in 1950, to set up an anti-Soviet operation on Moscow’s eastern flank…the North Koreans invaded the South and Blake, like many other western diplomats, was interned – and during his three-year period of captivity he changed sides. George Blake was no “Manchurian Candidate”, tortured and brainwashed into working for the communists while a prisoner of war. It was, he insisted, the spectacle of a helpless civilian population being attacked by mighty US bombers that had changed his world-view: “It made me feel ashamed of belonging to these overpowering, technically superior countries fighting against what seemed to me quite defenceless people.” He quietly informed his KGB captors that he was ready to work for them. In 1953, Blake and his fellow detainees were at last released and he returned to London as an SIS hero.

This UK “hero” was then caught spying for the Soviets (due to a Polish intelligence officer).

The suspected spy was unmasked by a tip from a defecting Polish intelligence officer who told the CIA that two Soviet agents were operating in Britain, one at a royal navy research centre, the other in SIS. They were codenamed Lambda-1 and Lambda-2. Quickly, Lambda-1 was identified as Harry Houghton, but it was months before Blake, then on temporary assignment in Lebanon to learn Arabic, became the prime suspect for Lambda-2.

He confessed and pleaded guilty, was sentenced to a long jail term, but soon escaped (with the help of Irish inmates, perhaps enamored with Soviet life) from “maximum security” into the open arms of Russia, where he continued to intentionally put hundreds of people in harm’s way.

Dozens are alleged to have been executed as a result of his actions, and he denied responsibility for their lives while simultaneously taking credit and awards.

He has just died aged 98, feted by Russia.

Goebbels Never Said THAT!

Did you know Nazi minister of propaganda Goebbels, one of Hitler’s closest men, said “The truth will always win”?

There’s been a problem on the Internet for a long time, as we all know: data integrity gets ignored by security professionals. Cliff “Cuckoo’s Egg” Stoll infamously warned us about this in 1995 in “Why the Web Won’t Be Nirvana”, which everyone has basically ignored.

Sure, people work on availability (how about them nines!) and of course after 2003 the boom of documented huge privacy breaches has been lighting up news headlines and even board-level radar screens.

But — and it’s a very BIG but — integrity largely has been ignored.

People now repeatedly and freely post quotes and attributions that simply were never said, or fake pictures that were never taken (as I made light of several times here).

Yet show me a security team prepared and ready to do a correction on data and deal with sources disputing veracity. It took how many years of obvious safety harms, including atrocity crimes, before Facebook could even be convinced to post warnings and moderate speech?
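For teams wondering where to start, here is a minimal sketch of my own (the names and the key are hypothetical, and this is not any team’s production design): sign content at publication, verify it on read, and keep an auditable trail whenever a correction lands.

```python
# Minimal sketch of a tangible integrity control (hypothetical names
# and key, my own illustration): sign content when published, verify
# on read, and record corrections in an auditable, append-only trail.
import hashlib
import hmac
import json
import time

SECRET_KEY = b"rotate-me-in-real-life"  # hypothetical signing key

def sign(content: str) -> str:
    return hmac.new(SECRET_KEY, content.encode(), hashlib.sha256).hexdigest()

def verify(content: str, tag: str) -> bool:
    return hmac.compare_digest(sign(content), tag)

corrections = []  # append-only correction log

def correct(old: str, new: str, reason: str) -> str:
    """Replace a record and leave a trail for anyone disputing veracity."""
    corrections.append({
        "when": time.time(),
        "old_tag": sign(old),
        "new_tag": sign(new),
        "reason": reason,
    })
    return sign(new)

quote = "Truth is the enemy of the state. -- Goebbels"
tag = sign(quote)
assert verify(quote, tag)  # tamper check on every read

tag = correct(quote, "The truth will always win. -- Goebbels",
              "misattribution: see German Propaganda Archive")
print(json.dumps(corrections, indent=2))
```

Nothing exotic here; the point is that a correction workflow with provenance is a weekend of engineering, not a moonshot.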

So what did Goebbels really say?

This is a natural environment for the historian. Which source to trust, what really happened and was said? That’s the heart of the mission for anyone claiming to understand and be able to explain history.

Now bring the typical security professional into such a fray and it’s like watching a deer in headlights.

I’ve given talks about this disconnect in our industry for decades now. In several cases I’ve tried to illuminate how easily security professionals themselves traffic in low-integrity information while talking about the importance of privacy.

The over-specialization in security actually has led to an even greater problem (e.g., integrity flaw risk increases dramatically as transparency decreases) that few are willing to talk about either.

If you hear a CISO press 100% into encryption and not at all into the issues of keeping data safe behind a lock and key (where throwing away the key destroys the data), hold up one minute and think about what you’re doing.
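A toy illustration of that point, assuming the third-party cryptography package (the scenario is mine, not any real CISO’s setup): encryption alone is just a lock, and discarding the key quietly converts “confidential” into “destroyed”.

```python
# Toy illustration, assuming the `cryptography` package is installed:
# encryption without key custody turns a confidentiality control into
# a data-destruction event.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"the only copy of the audit log")

del key  # the CISO "throws away the key"

# The ciphertext survives, but with the key gone the record is
# unrecoverable: never "breached", yet completely lost.
print(ciphertext[:16], b"...")
```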

Anyway, one good example is how Goebbels somehow has been credited with saying “Truth is the enemy of the state” when in fact he said the opposite. No, seriously, Goebbels was a huge proponent of telling the truth.

Robert Khoury’s 1982 “The Sociology of the Offbeat” had a good way of describing it on page 337:

Goebbels’ moral position in the diary was straightforward: he told the truth, his enemies told lies. Actually the question for him was one of expediency and not morality. Truth, he thought, should be used as frequently as possible; otherwise the enemy or the facts themselves might expose falsehood, and the credibility of his own output would suffer. Germans, he also stated, had grown more sophisticated since 1914: they could “read between the lines” and hence could not be easily deceived.

Thus we can easily see Goebbels’ actual words in 1941 were that truth wins, and that the use of lies (such as he observed the Allies using) is stupid and will lose:

The astonishing thing is that Mr. Churchill, a genuine John Bull, holds to his lies, and in fact repeats them until he himself believes them.

Compare the truth of what Goebbels actually said to what people think he said, as documented in the German Propaganda Archive list of false Nazi quotations, where the most popular forgery of all time is this one:

If you tell a lie big enough and keep repeating it, people will eventually come to believe it. The lie can be maintained only for such time as the State can shield the people from the political, economic and/or military consequences of the lie. It thus becomes vitally important for the State to use all of its powers to repress dissent, for the truth is the mortal enemy of the lie, and thus by extension, the truth is the greatest enemy of the State.

Goebbels never said THAT.

What Goebbels believed in, just to be clear, is “the ultimate victory of the truth”, as explained by German professor of history Peter Longerich in a 2014 biography.

Source: Peter Longerich, “Goebbels: A Biography”, New York: Random House, 2014.

Goebbels said THAT, and good luck getting takedowns or corrections filed on all the pages to correct the record. Will the truth really win?

And speaking of Internet activism, guess who has been spreading Goebbels’ saying that truth will always win?

Yup. WikiLeaks has a Nazi propaganda minister reference as their byline. Ok, to be fair, a lot of people say this across the spectrum. Just imagine for a minute that Goebbels’ saying was correctly cited and known.

I mean imagine a future world (it may in fact be coming soon) where security professionals are working on how best to wade into this problem of integrity flaws. Too many have been acting for too long like the risk of Nazis deploying harms on every available platform is some kind of new thing or outside their expertise or domain…

Hitler was photographed with his Minister of Propaganda, Joseph Goebbels, and yet someone painstakingly removed the latter from the image.

Meanwhile, actual attribution for the infamous statement should go to the poet Isabella Blagden in The Crown of a Life (1869):

If a lie is only printed often enough, it becomes a quasi-truth, and if such a truth is repeated often enough, it becomes an article of belief, a dogma, and men will die for it.