Testing Things the “Wrong Way” is the Right Way to Test

Yesterday I gave a talk at ISACA-SF where I repeatedly emphasized how AI auditing is about testing things in a way that breaks them.

This shouldn’t be news to anyone used to testing things, and yet many platforms somehow respond to algorithm failures by telling people to stop the tests.

In my talk I documented Amazon and Tesla doing this especially plainly, showing that their preferred response to security flaws is for people to stop testing for them. It’s like the 1980s all over again, despite bug bounties and stunt hacking having become so popular.

Here’s a perfect example from Facebook.

In 2017, I got fed up. I filmed a little experiment with the now-co-host of my podcast, Luke Bailey. We made a brand new Facebook account, and I spent the week manually liking conservative Facebook pages and then every subsequent page the platform recommended to me. The right-wing “Ryan” account radicalized, and hard. My feed jumped from normal Republican content to creepy boomer posts about sexy women to Alex Jones posts within a week.
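For anyone wanting to reproduce this kind of audit, here is a minimal sketch of the methodology in Python. Everything in it is hypothetical: Facebook exposes no public API for page recommendations, so the functions like_page and get_recommended_pages are placeholders for the manual clicks (or whatever instrumentation) an auditor actually uses.

```python
# Minimal sketch of the recommendation-drift audit described above.
# All names are hypothetical placeholders for manual or instrumented steps.

from datetime import date


def like_page(account: str, page: str) -> None:
    """Placeholder: 'like' a page as the fresh test account."""
    print(f"{account} likes {page}")


def get_recommended_pages(account: str) -> list[str]:
    """Placeholder: record what the platform recommends right now.

    In the real 2017 test this was done by hand, one day at a time.
    """
    return []  # replace with the recommendations actually observed


def audit_recommendation_drift(
    account: str, seed_pages: list[str], days: int = 7
) -> list[dict]:
    """Like every recommended page each day and log how suggestions drift."""
    log = []
    frontier = list(seed_pages)  # day one: ordinary partisan pages
    for day in range(days):
        for page in frontier:
            like_page(account, page)
        # Whatever the platform recommends next becomes tomorrow's input,
        # so the log captures how quickly the suggestions escalate.
        frontier = get_recommended_pages(account)
        log.append(
            {"day": day, "date": date.today().isoformat(), "recommended": frontier}
        )
    return log


if __name__ == "__main__":
    audit_recommendation_drift("right-wing-ryan", ["Ordinary GOP Page"])
```

The point of the loop is that the platform’s own output is fed back in as the next input, which is exactly what turns the drift from an anecdote into a measurable escalation curve.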

Facebook was very mad about this! Their response was, at the time, the most aggressive they had ever been with me: “This isn’t an experiment; it’s a stunt. It isn’t how people set up or use Facebook, and suggesting so is misleading.”

I should also point out that in 2017 a researcher reporting a vulnerability would have expected a massive payout from Facebook’s infamous bug bounty program. In this story, however, the security failure was so bad, the vulnerability so deep, that Facebook security responded with the opposite: they told the researcher to stop doing things in ways that prove a systemic lack of safety on the site related to business logic flaws (BLF).

Simple Guide to Regulating Social Media: How to Break Up Facebook

Separating communication from content is like saying the water utility shouldn’t be in the business of turning your taps into Coke machines. That’s the whole thing in a nutshell.

Status (like money, ideology, and ego) is power, and power is a question of authorization and consent. That is very different from generic content.

Nobody should want the 1950s “Mad Men” advertising agencies to own and run all the communications infrastructure in the United States. It’s like saying nobody should want tobacco companies, peddling literally cancerous “social entry” messages, to be in charge of actual social entry requirements (e.g. you must smoke to enter).

Likewise, nobody should want the 1930s “America First” media empires to own and run all the communications infrastructure in the United States.

This is a very different model from the plain delivery of information, which may or may not carry status and power-changing content. I’ve written about this many, many times here. The 1960s Carterfone fight with AT&T is a perfect example, since it centered only on harm in delivering content and had nothing to do with the content itself (Carter simply wanted to receive calls wirelessly while he rode a horse on his ranch, by adding a radio extender to his phone).

It was at this point the government split service providers from the hardware devices being connected to them, which unleashed the entire Internet by allowing modem and fax markets to be born.

America has a long, tortured history in this area of communication regulation. The Andrew Jackson administration, for example, pushed “gag rules” and aggressively intercepted mail to censor abolitionist speech, even arresting sailors at ports to confiscate their books and then imprisoning and torturing them into disclosing social contacts.

History thus should be helpful in charting the course ahead.

It warns us plainly that decoupling infrastructure ownership from the tangled power struggles over its content (e.g. measures of benefits and harms) is what delivered a far safer and better technology-driven market for ideas, especially because it reduced the threat of monopolization by private entities with harm-based business models.

Woodrow Wilson nationalizing infrastructure set off alarm bells for good reason, given he had just restarted the KKK from inside the White House. Yet at least within government the evil gag rules and inspections of mail, or the U.S. nationalization of its wires, were orders that could be repealed. What option is there under monopolization, where the private company runs the government?

Thus when people ask what is to be done about the long documented and discussed harms of Facebook, the answer has always been somewhat obvious: government regulation that removes those profiting from pollution from owning the plumbing too. Break these two incompatible halves apart immediately (applying criminal charges where relevant).

Explicitly prohibit public infrastructure providers from running harm-for-profit schemes.

In related news, the Swiss government has split service providers from the software services (rather than hardware devices) being connected to them:

…providers of chat, instant messaging, video conferencing, or Voice over IP (VoIP) services, such as WhatsApp, iMessage, Zoom, Teams, and Skype cannot be classified as telecom service providers, but rather “over-the-top” (OTT) service providers.

You should be able to dump a chat application (and its toxic contents) without having to lose connectivity entirely.

In other related news, American “Big Tech” is feverishly attempting to create monopolies where none should exist.

…the very tech companies pushing this idea stand to profit from it, because the national hub would likely be housed in the same companies’ commercial cloud computing services. …little more than a cash grab by what’s effectively the next generation of military contractors. The plan also could entrench the very same tech companies that President Joe Biden’s antitrust enforcers are working to rein in, these critics say.

How Not to Regulate Disinformation

Cigarettes famously were regulated to have very stern warnings on them to counter the disinformation of their manufacturers. Here’s just a sample from the FDA of the kind of messaging I’m talking about:

Source: FDA “Proposed Cigarette Health Warnings”

That’s the right way to regulate disinformation because it’s a harms-based approach. If you follow the wrong path, you suffer a lot and then die. Choose wisely.

It’s like saying if you point a gun at your head and pull the trigger it will seriously hurt you and very likely kill you. Suicide is immoral. Likewise, when someone refuses the COVID19 vaccine they are putting themselves, as well as those around them (as with smoking), at great risk of injury and death.

However, I still see regulators doing the wrong thing and trying to create a sense of “authenticity” in messaging instead of focusing on speech in the context of harm.

Take the government of Singapore, for example, which has this to say:

The Singapore-based website Truth Warriors falsely claims that coronavirus vaccines are not safe or effective — and now it will have to carry a correction on the top of each page alerting readers to the falsehoods it propagates.

Under Singapore’s “fake news” law — formally called the Protection from Online Falsehoods and Manipulation Act — the website must carry a notice to readers that it contains “false statement of fact,” the Health Ministry said Sunday. A criminal investigation is also underway.

Calling something a “false statement of fact” doesn’t change the fact that falsehoods are NOT inherently bad. Even worse, over-emphasis on forced authenticity can itself be harmful (denying someone privacy, for example, by demanding they reveal a secret).

Thus this style of poorly-constructed “authenticity” regulation could be a mistake for a number of important safety reasons, not least because it can seriously backfire.

It would be like the government requiring The Onion or the Duffelblog to have a splash page announcing that they publish fake information (let alone requiring it of the comedy industry as a whole).

Just take a quick look at how The Onion is reporting COVID19 lately:

Source: The Onion, “Man Who Posted ‘We Can All Get Through This Together’ Kicked Off Social Media For Spreading Covid-19 Hoax”

See what I did there? Calling out something as false could in fact drive more people to read it (popularizing things by unintentionally creating a nudge/seduction towards salacious “contraband”).

Indeed, it seems the Duffelblog has already created just such a warning voluntarily, probably because it found too many people unwittingly acting upon its stories as truthful.

And that raises the questions: does the warning actually increase readership, and has there been any harm reduction from the warning (i.e. was there any harm to begin with)?

Maybe this warning was regulated… it reminds me of earlier this year when a GOP member of Congress (Rep. Pat Fallon, R-Texas) exposed himself to the military as completely unable to tell fact from fiction.

Source: Twitter

I have a feeling someone in Texas government demanded Duffelblog do something to prevent Texans from entering the site, instead of educating Texans properly on how to tell fact from fiction (e.g. regulating harms through accountability for them).

Putting a gun to your head (refusing the vaccine) is a suicidal move (immoral), and accountability for encouraging suicide is best illustrated through regulation that clearly documents harms.