
Descartes on AI: I Think, Therefore I Am… Not a Machine

Keith Gunderson, a pioneering philosopher of robotics, in his 1964 paper “Descartes, La Mettrie, Language and Machines,” captured this passage from the 1637 Discourse in Robert Stoothoff’s translation:

If there were machines which bore a resemblance to our bodies and imitated our actions as closely as possible for all practical purposes, we should still have two very certain means of recognizing that they were not real men. The first is that they could never use words, or put together signs, as we do in order to declare our thoughts to others. For we can certainly conceive of a machine so constructed that it utters words, and even utters words that correspond to bodily actions causing a change in its organs… but it is not conceivable that such a machine should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do. Secondly, even though some machines might do some things as well as we do them, or perhaps even better, they would inevitably fail in others, which would reveal that they are acting not from understanding, but only from the disposition of their organs. For whereas reason is a universal instrument, which can be used in all kinds of situations, these organs need some particular action; hence it is for all practical purposes impossible for a machine to have enough different organs to make it act in all the contingencies of life in the way in which our reason makes us act.

Here is another translation:

…if there were machines which had the organs and the external shape of a monkey or of some other animal without reason, we would have no way of recognizing that they were not exactly the same nature as the animals… The first of these is that they would never be able to use words or other signs to make words as we do to declare our thoughts to others. For one can easily imagine a machine made in such a way that it expresses words, even that it expresses some words relevant to some physical actions which bring about some change in its organs … but one cannot imagine a machine that arranges words in various ways to reply to the sense of everything said in its presence, as the most stupid human beings are capable of doing. The second test is that, although these machines might do several things as well or perhaps better than we do, they are inevitably lacking in some other, through which we discover that they act, not by knowledge, but only by the arrangement of their organs. For, whereas reason is a universal instrument which can serve in all sorts of encounters, these organs need some particular arrangement for each particular action. As a result of that, it is morally impossible that there is in a machine’s organs sufficient variety to act in all the events of our lives in the same way that our reason empowers us to act.

And another one:

Big Tech Admits Security Teams Politically Directed and Intentionally Blind to Hate Groups

My head hurt when I read a new “insider” article on detecting and preventing hate on big data platforms. It’s awful on many, many levels.

It’s like seeing a story on airplane safety in hostile territory where former staff reveal they couldn’t agree politically on how to measure gravity in a way that appeased a government telling them that up is down. Or hearing that a crash in 2018 made safety staff aware of flying risks — as if nothing ever crashed before a year or two ago.

Really? You just figured out domestic terrorism is a huge problem? That says a lot, a LOT. The Civil War was fought after decades of terrorism, and the terrorism continued after the war ended; there’s a long, rich history of multi-faceted organizations conspiring and collaborating to undermine democracy. And that’s just in America, with its documented history of violent transfers of power.

Quick chart by me of where and when fascism took hold in Europe.

I’m not going to give away any insider secrets when I say this new article provides some shockingly awful admissions of guilt from tech companies that facilitated mass harms from hate groups and allowed the problem to get far worse (while claiming success in making it better).

Here’s a quick sample:

…companies defined hate in limited ways. Facebook, Twitter and YouTube have all introduced hate speech policies that generally prohibit direct attacks on the basis of specific categories like race or sexual orientation. But what to do with a new conspiracy theory like QAnon that hinges on some imagined belief in a cabal of Satan-worshipping Democratic pedophiles? Or a group of self-proclaimed “Western chauvinists” like the Proud Boys cloaking themselves in the illusion that white pride doesn’t necessarily require racial animus? Or the #StoptheSteal groups, which were based on a lie, propagated by the former president of the United States, that the election had been stolen? These movements were shot through with hate and violence, but initially, they didn’t fit neatly into any of the companies’ definitions. And those companies, operating in a fraught political environment, were in turn slow to admit, at least publicly, that their definitions needed to change.

“Proud Boys cloaking themselves” is about as sensible a phrase as loud boys silencing themselves. Everyone knows the Proud Boys, like other hate groups, very purposefully use signaling to identify themselves, right? (Hint: “men who refuse to apologize,” with frequent use of prominent Proud Boy logos and the colors black and yellow.)

Both the ADL and SPLC maintain easily referenced databases with the latest on signal decoding, not to mention the many posts I’ve written here.

Defining hate in limited ways (reduced monitoring) was meant to benefit whom, exactly, and why was that the starting point anyway? Did any utility ever start with “defined pollutants in limited ways” for the benefit of people drinking the water? Here’s a hint from Michigan, and a very good way to look at the benefit of an appropriately wide definition of harms:

This case has nothing whatsoever to do with partisanship. It has to do with human decency, resurrecting the complete abandonment of the people of Flint and finally, finally holding people accountable for their alleged unspeakable atrocities…

It should not be seen as a political act to stop extremist groups (despite them falsely claiming to be political actors — in reality they aim to destroy politics).

Who really advocated starting with the most limited definition of hate, a definition ostensibly ignorant of basic science and history of harm prevention (ounce of prevention, pound of cure, etc)?

In other words, “movements were shot through with hate and violence,” yet the companies say they were stuck worrying about “what to do” with rising hate and violence on their watch — as if shutting it down weren’t an obvious answer. They saw advantages for themselves in not doing anything about harms done to others… as proof of what moral principles, exactly?

It should be obvious, without a history degree, why it’s a dangerous disconnect to say you observe an imminent, immediate potential for harm yet stand idly by asking yourself whether it would be bad to help the people you “serve” avoid being harmed. Is a bully harmed if they can’t bully? No.

The article indeed brings up an inversion of care, where shutting down hate groups risked tech workers facing threats of attack. It seems to suggest this made them want to give in to the bully tactics and preserve their own safety at the cost of others being hurt; instead it should have confirmed that they were on the right path, and in a better position to be shutting bullies down so that others wouldn’t suffer the same threats (service to others instead of just self).

Indeed, what good is it to say hate speech policies prohibit direct attacks if movements full of hate and violence haven’t “directly attacked” someone yet? You’re not really prohibiting, are you? It’s like saying you prohibit plane crashes, but the plane hasn’t crashed yet so you can’t stop a plane from crashing. A report from the Mozilla Foundation confirms this problem:

While we may never know if this disinformation campaign would have been successful if Facebook and other platforms had acted earlier, there were clearly measures the platforms could have taken sooner to limit the reach and growth of election disinformation. Platforms were generally reactive rather than proactive.

Seriously. That’s not prohibiting attacks, that barely rises to even detecting them.

It’s kind of like hearing a pilot in the air say “gravity is a lie, a Democratic conspiracy…” instead of hearing the pilot say “I hate the people in America, so this plane is going to crash into a building and kill people.”

Is it really a big puzzle whether to intervene in both scenarios as early as possible?

I guess some people think you have to wait for the crash and then react by saying your policy was to prohibit the crash. Those people shouldn’t be in charge of other people’s safety. Nobody should sit comfortably if they say “hey, we could and should have stopped all that harm, but oops let’s react now!”

How does the old saying go…”never again, unless a definition is hard”? Sounds about right for these tech companies.

What they really seem to be revealing is an attitude of “please don’t hold me responsible for wanting to be liked by everyone, or for wanting an easier job” and then leaving the harms to grow.

You can’t make this stuff up.

And we know what happens when tech staff are so cozy and lazy that they refuse to stop harms, obsessing about keeping themselves liked and avoiding hard work of finding flaws early and working to fix them.

The problem grows dramatically, getting significantly harder. It’s the most basic history lesson of all in security.

FBI director says domestic terrorism ‘metastasizing’ throughout U.S. as cases soar

Perhaps most telling of all is that people comforted themselves with fallacies as a reason for inaction. If they did something, they reasoned falsely, it could turn into anything. Therefore they chose to do nothing for a long while, which facilitated atrocities, until they couldn’t ignore it any longer.

Here’s another excerpt from the article:

Inside YouTube, one former employee who has worked on policy issues for a number of tech giants said people were beginning to discuss doing just that. But questions about the slippery slope slowed them down. “You start doing it for this, then everybody’s going to ask you to do it for everything else. Where do you draw the line there? What is OK and what’s not?” the former employee said, recalling those discussions.

The slippery slope is a fallacy. You’re supposed to say “hey, that’s a fallacy, and illogical, so we can quickly move on,” as opposed to sitting on your hands. It would be like someone saying “here’s a strawman” and YouTube staff then disclosing how their highly paid, long-running discussions stayed centered on how they must defeat the strawman while ignoring the actual issue.

That is not how fallacies are to be handled. Dare I say, “where do you draw the line” is evidence the people meant to deal with an issue are completely off-base if they can’t handle a simple fallacy straight away and say “HERE, RIGHT HERE. THIS IS WHERE WE DRAW THE LINE” because slippery slope is a fallacy!

After all, if the slippery slope were a real thing instead of a fallacy, we would have to turn off YouTube entirely right now, SHUT IT DOWN, because if you watch one video on fluffy kittens the next thing you know you’re eyeballs deep in KKK training videos. See what I mean? The fallacy is not even worth another minute of consideration, yet somehow a “tech giant” policy person is stuck charging high rates to think about it for a long while.

If the slippery slope were an actual logical concern, YouTube would have to cease to exist immediately. It couldn’t show any video, ever.

And the following excerpt from the same article pretty much sums up how Facebook is full of intentional hot air — they sell themselves as ad-targeting geniuses, yet somehow go completely blind (irresponsible) when the targeting topic includes hate and violence:

“Why are they so good at targeting you with content that’s consistent with your prior engagement, but somehow when it comes to harm, they become bumbling idiots?” asked Farid, who remains dubious of Big Tech’s efforts to control violent extremists. “You can’t have it both ways.” Facebook, for one, recently said it would stop recommending political and civic groups to its users, after reportedly finding that the vast majority of them included hate, misinformation or calls to violence leading up to the 2020 election.

The vast majority of Facebook “civic groups” included hate, misinformation or calls to violence. That was no accident. The Mozilla Foundation report, while pointing out deepfakes were a non-threat, frames the willful inaction of Facebook staff like this:

Despite Facebook’s awareness of the fact that its group recommendations feature was a significant factor in growing extremist groups on its platform, it did little to address the problem.

Maybe I can go out on a limb here and give a simple explanation, borrowed from psychologists who research how people respond to uncomfortable truths:

In seeking resolution, our primary goal is to preserve our sense of self-value. …dissonance-primed subjects looked surprised, even incredulous [and] discounted what they could see right in front of them, in order to remain in conformity with the group…

Facebook staff may just be such white American elitists that they’re in full self-value-preservation mode, discounting the hate they see right in front of them to remain in conformity with… hate groups.

So let me end on a chillingly accurate essay from a philosopher in 1963, Hannah Arendt, explaining why it is the banality of evil that makes it so dangerous to humanity.

[Evil] possesses neither depth nor any demonic dimension yet–and this is its horror–it can spread like a fungus over the surface of the earth and lay waste the entire world. Evil comes from a failure to think.

Now compare that to a quote in the article from someone “surprised” to find out the KKK are nice people. If anything, history tells us exactly this point over and over and over again, yet somehow it was news to the big tech expert on hate groups. Again from the article:

In late 2019, Green traveled to Tennessee and Alabama to meet with people who believe in a range of conspiracy theories, from flat earthers to Sandy Hook deniers. “I went into the field with a strong hypothesis that I know which conspiracy theories are violent and which aren’t,” Green said. But the research surprised her, as some conspiracy theorists she believed to be innocuous, like flat earthers, were far more militant followers than the ones she considered violent, like people who believed in white genocide. “We spoke to flat earthers who could tell you which NASA scientists are propagating a world view and what they would do to them if they could,” Green said. Even more challenging: Of the 77 conspiracy theorists Green’s team interviewed, there wasn’t a single person who believed in only one conspiracy. That makes mapping out the scope of the threat much more complex than fixating on a single group.

For me this is like reading that Green discovered water is wet. No, really, water turned out to be wet, but Green didn’t know it until she went to Tennessee and Alabama and put a finger in the water there. She spent a lot of money on travel. She discovered water is wet, and also that white genocide is a deeply embedded, systemic silent killer in America rather than an unpolished and loud one… and that people with cognitive vulnerabilities who are easily manipulated are… wait for it… very vulnerable and easily manipulated. What a 2019 revelation!

Please excuse the frustration. Those who study history are condemned to watch people repeat it. In military history terms, here’s what we know is happening today in information warfare just like it has many times before:

For Russia, a core tenet of successful information operations is to be at war with the United States, without Americans even knowing it (and the Kremlin can and does persistently deny it).

Seemingly good folks, even those lacking urgency, can quickly do horrible things by failing to take a stand against wrongs. We know this, right? It is the seemingly “nice” people who can be the most dangerous because they normalize hate and allow it to be integrated into daily routines, systemically delivering evil as though it is anything but that (requiring a science of ethics to detect and prevent it).

From the Women of the Ku Klux Klan, who reinvigorated white supremacy in the 1920s… Genocide is women’s business.

Can physics detect up versus down? Yes. Can ethics detect right versus wrong? Yes. Science.

White supremacy is a blatant lie, yet big tech allows it to spread as a silent killer in America.

When those of us building AI systems continue to allow the blatant lie of white supremacy to be embedded in everything from how we collect data to how we define data sets and how we choose to use them, it signifies a disturbing tolerance…. Data sets so specifically built in and for white spaces represent the constructed reality, not the natural one.

Putting Woodrow Wilson in the White House (a historic white space, literally named to keep black Americans out of it) was a far worse step than any amateur hate group flailing loudly about their immediate and angry plans. In fact, the latter is often used by the former as a reason to be put in power, and yet they then just normalize the hate and violence (e.g., Woodrow Wilson claimed to be defending the country while in fact he idly allowed domestic terrorism and “wholesale murder” of Americans under the “America First” platform).

Wilson’s 1915 launch of America First to restart the KKK has always been a very clear hate signal of an extremist group, and yet even today we see it flourish on big tech platforms, as if something is blinding their counter-terrorism experts to a simple take-down.

Interesting tangent from history: the January 1917 telegram intercepted and decoded by the British, warning the Americans of a German plot to invade via Mexico… was actually sent over American communication lines. The Americans claimed not to care what messages were on their lines, so the British had to delicately intercept and expose impending enemy threats to America that were transiting American lines yet ignored by Americans. History repeats, amiright?

Thus if big tech can say they know how to ban the KKK when they see it, why aren’t they banning America First? The two are literally the same ffffffing thing and it always has been that way! How many times do historians have to say this for someone in a tech policy job to get it?

Hello Twitter can you see the tweets on Twitter about this… from years ago?

Is this thing on?

Here’s a new chart of violent white nationalist content (America First) continuing to spread on Twitter as a perfect very recent example.

Meltwater social media analytics for text mentions of “AFPAC” and “America First Political Action Conference” across the web from January 26 to February 26 found that discussion of the event primarily took place on Twitter and on forums like 4chan. This data does not differentiate between positive, neutral, and negative discussions. (Source: @jaredlholt/DFRLab via Meltwater Explore)

Big tech staff are very clearly exhibiting a failure to think (to put it in terms of Arendt’s clear 1963 warning).

Never mind all the self-congratulatory “we’re making progress” marketing. Flint, Michigan didn’t get to say “hey, where’s our credit for other stuff we filtered out” when people reviewed fatalities from lead poisoning. Flint also doesn’t get to say “we were going to remove the poison but then we got stuck on a slippery slope topic and decided to let the poison flow, since we got paid the same to do nothing about harms.”

Big Tech shouldn’t get a pass here on very well documented harms and obviously bad response. Let’s be honest, criminal charges shouldn’t be out of the question.

X-Rays Defeat LetterLocking: Secrets Exposed of Ancient Folded Papers

A new paper in Nature describes an algorithm that can read tightly folded letters without physically opening them.

The challenge tackled here is to reconstruct the intricate folds, tucks, and slits of unopened letters secured shut with “letterlocking,” a practice—systematized in this paper—which underpinned global communications security for centuries before modern envelopes.

It makes the bold case that these tight folds from letters 300 years ago should be considered an historic link to modern cryptography.

Source: Nature, letterlocking examples from the Brienne Collection.
From: Unlocking history through automated virtual unfolding of sealed documents imaged by X-ray microtomography

Letterlocking was an everyday activity for centuries, across cultures, borders, and social classes, and plays an integral role in the history of secrecy systems as the missing link between physical communications security techniques from the ancient world and modern digital cryptography.

I have to say I disagree with this “missing link” comment. Cryptography doesn’t seem to come into it, as there is no decipher key to unlock them unless you stretch a definition to include unfolding.

A more obvious link from these letterlock examples to modern methods would be… the modern letterlock.

Letterlocking: Aerogramme, United States Postal Service (1995) from letterlocking on Vimeo.

I suppose it’s important to say envelopes were an 1800s innovation in secrecy by providing an, ahem, envelope. Aerogrammes are ostensibly less safe than putting a letter in an envelope, even though attacks on either one are basically the same — unlock, unfold, read.

That is why I say a “locking” fold of paper without an envelope doesn’t make a direct link to modern encryption. I mean encryption also existed in letters for many centuries (as I’ve written here before), separate from how the letters were folded.

For example, here is a German message intercepted in 1918 by a British operator in Basra after the liberation of Iraq.

The bottom note says “2 letters missed thro machine gun jam”, which I suppose would be comparable to the “wormholes” in letterlock unfolding. But unlike locked letters, which can be read once unfolded, this text still lacks a key.

For another example, here’s an old slide I made to show how the key in a 16th-century “Cardan grille” system (early steganography) was used during the American Revolution:
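To make the grille-as-key idea concrete, here is a minimal Python sketch, with an invented cover letter and hole positions (not the actual Revolutionary-era message from the slide). The grille is the decipher key: without it the letter reads as ordinary correspondence.

```python
# Minimal sketch of a Cardan grille: the grille (the key) is a set of hole
# positions cut in a mask. Laid over an innocuous cover letter, it reveals
# the hidden message. All text and positions here are invented examples.
def read_through_grille(lines, holes):
    """Return only the words visible through the grille's holes."""
    words = [line.split() for line in lines]
    return " ".join(words[row][col] for row, col in holes)

cover = [
    "MY DEAR FRIEND THE GUNS HAVE ARRIVED SAFELY AND",
    "ALL THE FAMILY WILL MOVE TO TOWN AT THE FIRST",
    "SIGN OF SPRING WEATHER WE HOPE TO SEE YOU SOON",
]
# The key: (row, word-index) holes in the mask.
grille = [(0, 4), (1, 4), (1, 7), (1, 9), (2, 0), (2, 1), (2, 2)]

print(read_through_grille(cover, grille))
# -> GUNS MOVE AT FIRST SIGN OF SPRING
```

Intercept the letter without the grille and you read family news; hold the right mask over it and the real message appears. That key is what the letterlocked folds, on their own, do not have.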

Tom Cruise is a Fake. For Real This Time.

Why are some fakes seen as ok and others are “scary”? Hint: agency and power in voice.

How a movie character is written or portrayed influences a viewer’s impression, which can in turn influence people’s stereotypes on gender norms.

More to the point:

White men tend to only listen to other white men. They will occasionally listen to a white woman.

Something I’ve always known about Tom Cruise is that he is a rich white man who made his fortune by becoming “fake” and assuming the identities of others. Literally. He is a paid actor who makes a living from impersonation, so it should be fair to say he is a highly celebrated faker.

Here’s a helpful chart of privilege suggested by Eugenia Cheng in her talk about an unexpected tool for “understanding inequality”.

Source: “An unexpected tool for understanding inequality: abstract math”

Perhaps we could adapt that chart to one of trust, particularly as it applies to someone presenting themselves with attributes (rich-white-male) that supposedly project integrity in their message delivery.

Tom Cruise is so highly paid because his fakes are received as valuable (e.g. entertaining, informative) instead of threatening, and also because of an odd form of acceptance of his reality. People in fact think he’s both tall and well dressed (expected of rich white men, yet neither is true — sophisticated teams give him that appearance).

Now comes an article with a stark warning that evidence has been found of Tom Cruise, the fake, being faked.

Deepfake videos of Tom Cruise show the technology’s threat to society is very real: We’re entering scary times.

Scary? Entering scary times? Have you seen this from 1986, the true heyday of cyber hacking?

WARNING: DESPITE THIS CONVINCING VIDEO IN CIRCULATION, TOM CRUISE WAS NOT IN THE NAVY, DID NOT FLY JETS …ETC ETC

Videos of Tom Cruise have shown, since at least the 1980s, technology’s threat to society by allowing Tom Cruise to be a fake.

Everyone needs to ask themselves whether Tom Cruise is a threat to society, since he is an actor who makes a living being a fake. Think about it. How often have you really seen a real Tom Cruise? Ever?

Incidentally, here was my take several years ago on that movie poster of Tom Cruise showing that anyone these days can make a fake of anything using technology. Admittedly it DID NOT age well.

Original artwork by me.

And if you are wondering how you can reliably detect that my image is a fake, unlike the original image of Tom Cruise (also a fake), then just look very, very closely at the eyes.

In a real photo or video, the reflections in the two eyes would generally appear to be the same shape and color. However, most images generated by artificial intelligence, including generative adversarial network (GAN) images, fail to do this accurately or consistently, possibly because many photos are combined to generate the fake image.
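As a rough illustration of that heuristic (not a production detector), here is a minimal Python sketch that uses OpenCV’s stock Haar cascade to find two eyes in a hypothetical image file and compares their bright specular highlights; the filename, brightness threshold, and similarity score are all assumptions made for the example.

```python
# Minimal sketch of the "compare the eye reflections" heuristic described
# above. Assumes a frontal face photo saved as "photo.jpg" (hypothetical
# filename) and OpenCV installed. Real deepfake-detection research is far
# more involved; this only illustrates the idea.
import cv2
import numpy as np

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(eyes) >= 2:
    # Keep the two largest detections as the left and right eyes.
    eyes = sorted(eyes, key=lambda e: e[2] * e[3], reverse=True)[:2]
    highlights = []
    for (x, y, w, h) in eyes:
        eye = gray[y:y + h, x:x + w]
        # The specular highlight is the brightest region of the eye.
        _, bright = cv2.threshold(eye, 220, 255, cv2.THRESH_BINARY)
        highlights.append(cv2.resize(bright, (32, 32)) > 0)

    # In a genuine photo the two highlights tend to match in shape and
    # position; in many GAN images they do not. Compare them with a simple
    # intersection-over-union score (closer to 1.0 means more similar).
    a, b = highlights
    union = np.logical_or(a, b).sum() or 1
    iou = np.logical_and(a, b).sum() / union
    print(f"highlight similarity (IoU): {iou:.2f}")
else:
    print("could not find two eyes; heuristic does not apply")
```

A low similarity score is only a hint, not proof; lighting, pose, and compression can all throw it off, which is why this check is best treated as one signal among many.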

I mean, how else to tell, aside from the fact that RMS is the known founder of the Free Software Foundation (FSF) and GNU’s Not Unix (GNU), and obviously would never fit into a flight suit.

We dispense shame and hate on all the “paparazzi” who violate his privacy and dare to expose a real Tom Cruise (e.g. how short and badly dressed he is), yet we laud all the fakery he thrives on.

The alarmist article doesn’t bother to address such a very important and simple problem with its analysis.

It all raises the question of why we should be comfortable trusting a fake like Tom Cruise up until now, but then worry about someone else making a fake of his fakes.

In other words, why should we trust Tom Cruise being the only responsible fake, more than someone who is faking Tom Cruise being a fake?

If we could achieve trust in one fake (a Scientologist of all things, who peddles fake beliefs), why not achieve trust in the fake of that fake? Or maybe another way of asking it is: who really is scared by a world where a Tom Cruise fakes being tall, or fakes being a Navy pilot?

Some may claim to be “scared” by the idea of agency and voice being held by those not in power. That is what this really is about.

Someone who doesn’t appear physically to be Tom Cruise (a non-white, non-male) now may be able to attain the same power of influence that used to be reserved only for Tom Cruise (thanks to technology, just like the technology Tom Cruise used to appear taller than he is).

Imagine a black woman putting her words into the mouth of Tom Cruise and nobody detecting that it really is a black woman’s ideas. SO SCARY!

It’s about power. Why is power scary?

In reality, this kind of fear mongering about technology goes back to the turn of the last century, when human faces were put on machines and people started experimenting with the idea of robots and inauthentic presence enabled by machines.

And even more importantly it takes us back to Wollstonecraft’s first publication (the 1790 Vindication of the Rights of Men) being extremely popular while she remained anonymous, yet her second publication under her real name was shunned because… the author admitted to being a woman. If only she could have published her brilliant works as a Tom Cruise video, right?

Also, to be fair, Tom Cruise is someone who battled with perception his whole life and made a career out of presenting a different vision than others were assigning to him.

He overcame obstacles and transformed his own physical appearance from something that he was ashamed of into an unbelievable physical representation, thus mastering the art of a fake.

People celebrate his achievement of fake Tom Cruise, so perhaps we should do the same celebrations for achievement of fake fake Tom Cruise.

I’ve written about all this security theater before, with regard to people faking the Queen of England. I write about it because I continue to find it amusing how it is a security topic that is literally about theater, yet nobody seems to admit the huge irony.

Additional food for thought: Americans spread loads of fake art of the traitor General Lee after the Civil War (back to my point about industrialization-era fakes), not to mention American image manipulation going back to President Lincoln’s time (his portrait was a politicized fake — his head mounted on the body of someone opposed to freedom).

Putting up a statue of Lee is about the same thing as if Americans went about erecting monuments to Osama bin Laden after 9/11. Show me the outrage about statues of Lee before we think someone faking the fake Tom Cruise is a top concern. In fact, for all the deepfake art being generated using old photographs, it’s about time someone animated Lee’s statues with his own authentic words asking his followers to never put up statues of him.

Talk about scary fakes.

If anyone thinks it is “scary” now that Americans believe something is real that instead has been entirely faked… have I got some very real news about frightening times we’ve been in for over 100 years!

1860 Deepfake Machine

And on that note, who wouldn’t rather hear the weather report from a cat?

Source: WLOS

Update March 5: Vice has investigated the source of fakes of the fake Tom Cruise, and found it’s a sophisticated operation using professional actors!

The Tom Cruise TikTok videos required not only the expertise of Ume and his team but also the cooperation of Miles Fisher, a well-known Tom Cruise impersonator who was behind a viral video in 2019 that purported to show Cruise announcing his candidacy for the 2020 election. […] Ume has even detailed some of the highly complex and involved technical processes he had to go through to produce previous deepfakes. So, while the Tom Cruise TikTok videos that went viral last week may look like they were created in minutes, the reality is that they took a lot of time, technical expertise, and the skilled performance of a real actor.

If it is good news for anyone that it takes a huge professional team, including an actor, to fake another actor, then the fears are being validated as really being about power and the barrier to entry being lowered by technology.

And I would argue that the economics of a lower barrier to entry mean regulation, not to mention social norms of use, should kick in the same way as ever, because artistic fakes are nothing new. Even the medium hasn’t changed here, so there’s literally nothing new except the idea that more people can do what already has been done for centuries, if not longer.

Lattice of pseudonyms. Source: A terminology for talking about privacy by data minimization, 2010