Simple Guide to Regulating Social Media: How to Break Up Facebook

Separating communication from content is like saying the water utility shouldn’t be in the business of turning your taps into Coke machines. That’s the whole argument in a nutshell.

Status (like money, ideology and ego) is power, and power is a question of authorization and consent. That is very different from generic content.

Nobody should want the 1950s “Mad Men” of advertising agencies to own and run all the communications infrastructure in the United States. It would be like letting tobacco companies, which peddled literally cancerous “social entry” messaging, set the actual social entry requirements (e.g. you must smoke to enter).

Likewise, nobody should want the 1930s “America First” media empires to own and run all the communications infrastructure in the United States.

This is a very different model from the plain delivery of information, which may or may not carry status and power-changing content. I’ve written about this many, many times here. The 1960s Carterfone fight with AT&T is a perfect example, since it centered entirely on harm in delivering content and had nothing to do with the content itself (Carter wanted to receive calls wirelessly while riding a horse on his ranch, simply by adding a radio extender to his phone).

It was at this point the government split service providers from the hardware devices being connected to them, which unleashed the entire Internet by allowing modem and fax markets to be born.

America has a long, tortured history with this aspect of communications regulation, such as during the Andrew Jackson administration, when he pushed “gag rules” and aggressively sought to intercept mail to censor abolitionist speech, including arresting sailors at ports to confiscate their books and imprisoning and torturing them into disclosing their social contacts.

History thus should be helpful in charting the course ahead.

It warns us plainly that decoupling infrastructure ownership from the tangled power struggles over its content (e.g. measures of benefits and harms) is what delivered a far safer and better technology-driven market for ideas, especially because it reduced the threat of monopolization by private entities running harm-based business models.

Woodrow Wilson nationalizing infrastructure set off alarm bells for good reason, given he had just restarted the KKK from inside the White House. Yet at least within government the evil gag rules, the inspection of mail, and the U.S. nationalization of its wires were orders that could be repealed. What option is there under monopolization, where the private company runs the government?

Thus when people ask what is to be done about the long-documented and widely discussed harms of Facebook, the answer has always been fairly obvious: government regulation that removes those profiting from the pollution from also owning the plumbing. Break these two incompatible halves apart immediately (applying criminal charges where relevant).

Explicitly prohibit public infrastructure providers from running harm-for-profit schemes.

In related news, the Swiss government has split service providers from the software devices that are being connected to them:

…providers of chat, instant messaging, video conferencing, or Voice over IP (VoIP) services, such as WhatsApp, iMessage, Zoom, Teams, and Skype cannot be classified as telecom service providers, but rather “over-the-top” (OTT) service providers.

You should be able to dump a chat application (and its toxic contents) without having to lose connectivity entirely.

In also related news, American “Big Tech” is feverishly attempting to create monopolies where none should exist.

…the very tech companies pushing this idea stand to profit from it, because the national hub would likely be housed in the same companies’ commercial cloud computing services. …little more than a cash grab by what’s effectively the next generation of military contractors. The plan also could entrench the very same tech companies that President Joe Biden’s antitrust enforcers are working to rein in, these critics say.

How Not to Regulate Disinformation

Cigarettes famously were regulated to have very stern warnings on them to counter the disinformation of their manufacturers. Here’s just a sample from the FDA of the kind of messaging I’m talking about:

Source: FDA “Proposed Cigarette Health Warnings”

That’s the right way to regulate disinformation because it’s a harms-based approach. If you follow the wrong path, you suffer a lot and then die. Choose wisely.

It’s like saying that if you point a gun at your head and pull the trigger it will seriously hurt you and very likely kill you. Suicide is immoral. Likewise, when someone refuses the COVID-19 vaccine they are putting themselves, as well as those around them (as with smoking), at great risk of injury and death.

However, I still see regulators doing the wrong thing and trying to create a sense of “authenticity” in messaging instead of focusing on speech in the context of harm.

Take the government of Singapore, for example, which has this to say:

The Singapore-based website Truth Warriors falsely claims that coronavirus vaccines are not safe or effective — and now it will have to carry a correction on the top of each page alerting readers to the falsehoods it propagates.

Under Singapore’s “fake news” law — formally called the Protection from Online Falsehoods and Manipulation Act — the website must carry a notice to readers that it contains “false statement of fact,” the Health Ministry said Sunday. A criminal investigation is also underway.

Calling something a “false statement of fact” doesn’t change the fact that falsehoods are NOT inherently bad. Even worse, over-emphasis on forced authenticity can itself be harmful (denying someone privacy, for example, by demanding they reveal a secret).

Thus this style of poorly-constructed “authenticity” regulation could be a mistake for a number of important safety reasons, not least because it can seriously backfire.

It would be like the government requiring The Onion or the Duffelblog to carry a splash page announcing that their content is fake (let alone requiring the same of the comedy industry as a whole).

Just take a quick look at how The Onion is reporting COVID-19 lately:

Source: The Onion, “Man Who Posted ‘We Can All Get Through This Together’ Kicked Off Social Media For Spreading Covid-19 Hoax”

See what I did there? Calling something out as false could in fact drive more people to read it (popularizing it by unintentionally creating a nudge/seduction towards salacious “contraband”).

Indeed, it seems the Duffelblog has already created just such a warning voluntarily, probably because it found too many people unwittingly acting on its stories as truthful.

And that raises the questions: does the warning actually increase readership, and has there been harm reduction from the warning (e.g. was there any harm to begin with)?

Maybe this warning was mandated… it reminds me of earlier this year when a GOP member of Congress (Rep. Pat Fallon, R-Texas) exposed himself to the military as completely unable to tell fact from fiction.

Source: Twitter

I have a feeling someone in the Texas government demanded the Duffelblog do something to prevent Texans from entering the site, instead of properly educating Texans on how to tell fact from fiction (e.g. regulating harms through accountability for them).

Putting a gun to your head (refusing the vaccine) is a suicidal move (immoral), and accountability for encouraging suicide is best established through regulation that clearly documents harms.

Is Shooting at a Tesla Ethical?

Iron Dome intercepts incoming threats to Tel Aviv. Now imagine if Hamas launched waves of Teslas overland instead of rockets through the air.

You may be interested to hear that researchers have posted an “automation” proof-of-concept for ethics.

Delphi is a computational model for descriptive ethics, i.e., people’s moral judgments on a variety of everyday situations. We are releasing this to demonstrate what state-of-the-art models can accomplish today as well as to highlight their limitations.

It’s important to read the announcement in terms of its giant disclaimer, which says the answers are a collection of opinions rather than logical or actually sound reasoning (e.g. an engine biased towards mob rule, as opposed to moral rules).
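To make that distinction concrete, here is a minimal and entirely hypothetical sketch (in Python; this is not Delphi’s actual code, and the scenarios, labels, and function names are invented for illustration) of what a purely descriptive-ethics engine amounts to: it returns whichever opinion is most common in its training data, with no moral reasoning at all.

```python
# Hypothetical sketch of a "descriptive ethics" engine (NOT Delphi's code).
# It simply echoes the majority opinion in its training data -- mob rule,
# not moral rules. Scenarios and labels below are invented for illustration.
from collections import Counter

crowd_judgments = {
    "intentionally running a red light": ["it's bad", "it's bad", "it's okay"],
    "shooting at a dangerous car": ["it's okay", "it's okay", "it's bad"],
}

def descriptive_ethics(scenario: str) -> str:
    """Return the most common crowd opinion for a scenario, nothing more."""
    opinions = crowd_judgments.get(scenario, ["no opinion"])
    return Counter(opinions).most_common(1)[0][0]

# Whatever the crowd says most often wins, regardless of whether it is sound.
print(descriptive_ethics("shooting at a dangerous car"))
```

The point of the sketch is only that such a model can never be more moral than the pile of opinions fed into it.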

And now, let’s take this “automation” of ethics with us to answer a very real and pressing question of public safety.

A while ago I wrote about Tesla drivers intentionally trying to train their cars to run red lights. Naturally I posed this real-world scenario to Delphi, asking if shooting at a Tesla would be ethical:

Source: Ask Delphi

If we deployed “loitering munitions” into intersections, and gave them the Delphi algorithm, would they be right to start shooting at Teslas?

In other words, would Tesla passengers be “reasonably” shot to death because they operate cars known to willfully violate safety, in particular by intentionally running red lights?

With the Delphi ethics algorithm in mind, and the data continuously showing Tesla risk increasing with every new model, watch this new video of a driver (“paying very close attention”) intentionally running a red light on a public road using his “Full Self Driving (FSD) 10.2” Tesla.

Oct 11, 2021: Tesla has released FSD Beta to 1,000 new people and I was one of the lucky ones! I tried it out for the first time going to work this morning and wanted to share this experience with you.

“That was definitely against the law,” this self-proclaimed “lucky” driver says while he is breaking the law (full video).

You may remember my earlier post about Tesla’s newest “FSD 10” being a safety nightmare? Drivers across the spectrum showed contempt and anger that the “latest” high-cost software was unable to function safely, sending them dangerously into oncoming traffic.

Tesla seems to have responded by removing the privacy of its customers, presumably to find a loophole where it can blame someone else instead of fixing the issues:

…drivers forfeit privacy protections around location sharing and in-car recordings… vehicle has automatically opted into VIN associated telemetry sharing with Tesla, including Autopilot usage data, images and/or video…

Now the Tesla software reportedly is even worse in its latest version, to the point that today the company abruptly cancelled the release of 10.3 and attempted a weird, half-hearted rollback.

Source: Twitter

No, this is not expected. No, this is not normal. For comparison, see my recent post about Volvo, which issued a mandatory recall of all its vehicles.

Having more Teslas in your neighborhood arguably makes it far less safe, according to the latest data; they are a very real and present threat quite unlike that of any other car company.

If Tesla were allowed to make rockets, I suspect they would all be exploding mid-flight or misfiring right now, kind of like we saw with Hamas.

This is why I wrote a blog post months ago warning that Tesla drivers were trying to train their cars to violate safety norms and intentionally run red lights….

The very dangerous (and arguably racist) public “test” cases might have actually polluted Tesla’s algorithms, turning the brand into an even bigger and more likely threat to anyone near its cars on the road.

Source: tesladeaths.com

That’s not supposed to happen. More cars were supposed to mean fewer deaths because of “learning”, right? As I’ve been saying for at least five years here, more Tesla means more death. And look who is finally starting to admit how wrong they’ve been.

Source: My presentation at MindTheSec 2021

They are a huge outlier (and liar).

Source: tesladeaths.com

So here’s the pertinent ethics question:

If you knew a Tesla speeding towards an intersection might be running the fatally flawed FSD software, should a “full-self shooting” gun at that intersection be allowed to fire at it?

According to Delphi the answer is yes!? (Related: “The Fourth Bullet – When Defensive Acts Become Indefensible” about a soldier convicted of murder after he killed people driving a car recklessly away from him. Also Related: “Arizona Rush to Adopt Driverless Cars Devolves Into Pedestrian War” about humans shooting at cars covered in cameras.)

Robot wars come to mind if we unleash the Delphi-powered intersection guard on the Tesla threats. Of course I’m not advocating for that. Just look at this video from 2015 of robots failing and flailing to see why flawed robots attacking flawed robots is a terrible idea:

Such a dystopian hellscape of robot conflict, of course, is a world nobody should want.

All that being said, I have to go back to the fact that the Delphi algorithm was designed to spit out a reflection of mob rule, rather than moral rules.

Presumably, if it were capable of moral thought it would simply answer “No, don’t shoot, because Tesla is too dangerous to be allowed on the road. Unsafe at any light; just ban it instead so it would be stopped long before it ever reaches an intersection.”

Why Would a Vietnam War POW Jump From a Helicopter to Her Death?

Since the secrecy requirements for American soldiers of the Vietnam War have expired, new details are emerging in stories like this one:

[Military Assistance Command Vietnam-Studies and Observations] encouraged and incentivized prisoner snatching… There were no overarching standard-operating procedures… SOG commandos inspected their prisoner more closely, only to find that it was a woman. In their moment of surprise, the prisoner escaped, jumping from the helicopter to her death.

Why would this POW, aside from the lack of standard operating procedures, jump out of a helicopter to certain death? What exactly does “more closely” mean in terms of an inspection being done during a helicopter ride after capture? Such stories deserve more thorough investigation.

More details are in the U.S. Army “The Indigenous Approach” podcast episode “MACV-SOG: A Conversation with John Stryker Meyer” (Part I)(Part II)