U.S. Fighting Disinformation? Look at the 1932 Presidential Election

Regulation and targeted response strategies to fight disinformation worked after FDR won the 1932 election, and they are likely to work again today when someone musters the national trust of residents ready to take action.

Without that kind of popular support, and by instead making concessions to technology companies, it’s unlikely we’ll see any progress today.

DefenseOne writes there has been a necessary shift in security from a focus entirely on confidentiality toward a greater focus on integrity. They then propose three steps to get there.

First is better, faster understanding by the U.S. government of what disinformation American adversaries are spreading—or, ideally, anticipation of that spread before it actually happens. […]

Second is, in appropriate circumstances, the swift, clear, and direct intervention of U.S. government spokespersons to expose falsities and provide the truth. […]

Third is an expanded set of U.S. government partnerships with technology companies to help them identify disinformation poised to spread across their platforms so that they can craft appropriate responses.

What this article misses entirely is what has worked in the past. Unless they address why that wouldn’t work today, I’m skeptical of their suggestions to try something new and untested.

Point one sounds like a call for more surveillance, which will obviously run into massive resistance before it even gets off the ground. So there’s a tactical and political headwind. Points two and three are unlikely to work at all.

The most effective government spokesperson in the past was typically the President. That’s not possible today for obvious reasons. And partnerships with technology companies (radio, newspapers) weren’t possible in the past, just as they aren’t possible today: Facebook’s CEO has repeatedly said he will continue to push disinformation for profit.

I’ve been openly writing and presenting on this modern topic since 2012 (e.g. BSidesLV presentation on using data integrity attacks on mobile devices to foment political coups), with research going back to my undergraduate and graduate degrees in the mid-1990s.

What worked in the past? Look at the timeline after the 1932 Presidential election to 1940, which directly addressed Nazi military disinformation campaigns (e.g. America First) promoting fascism.

  1. Breakup of the organizations disseminating disinformation (regulation).
  2. Election of a President who can speak truth to power and aligns government with values that block attempts to profit on disinformation/harms (regulation).
  3. Rapid dissemination of antidotes domestically, and active response abroad with strong countermeasures.
Roosevelt defeats Nazis at the ballot box:

“By 1932, Hearst was publishing articles by Adolf Hitler, whom Hearst admired for keeping Germany out of, as Hitler put it in a Hearst paper, ‘the beckoning arms of Bolshevism.’ Hitler instead promoted a transcendent idea of nationalism—putting Germany first—and, by organizing devoted nationalist followers to threaten and beat up leftists, Hitler would soon destroy class-based politics in his country. Increasingly, Hearst wanted to see something similar happen in the United States.”

The question today thus should not be about cooperating with those who have been poisoning the waters. The question should be whether regulation is possible in an environment of get-rich-quick, fake-it-til-you-make-it, greedy anti-regulatory values.

Take the Flint, Michigan water disaster as an example, let alone Facebook/Google/YouTube/Wells Fargo.

After officials repeatedly dismissed claims that Flint’s water was making people sick, residents took action.

America has a history of bottom-up (populist) approaches to governance solving top-down exploitation (It’s the “United” part of USA fighting the King for independence). A bottom-up approach isn’t likely to come from the DefenseOne strategy of partnerships between big government and big technology companies.

In fact, with history as our guide, we can see how President Reagan’s concept of partnership with big technology was to remove protection of American children from predators (promoting “ideological child abuse” for profit), as I explained in my 2018 OWASP talk “Unpoisoned Fruit”.

I’m not saying it will be easy to pivot to populist solutions. It will definitely be hard to take on broad swaths of corrupt powerful leaders who repeatedly profit from poisoning large populations for personal gain.

Yet that’s the obvious fork in our road today, and even outside entities know they can’t thrive if Americans choose to be united again in their take-down of selfish profiteers who now brazenly argue for their right to unregulated harms in vulnerable populations.

If Zuckerberg were CEO of Juul… right now he’d be trying to excite investors by saying ten new fruity tobacco flavors are coming next quarter for freedom-loving children.

The boss of e-cigarette maker Juul stepped down on Wednesday in the face of a regulatory backlash and a surge in mysterious illnesses linked to vaping products.

I wrote in 2012 about the immediate need for regulation of vaping. Seven years later that regulation finally is happening, sadly only after dozens of people died suddenly and without explanation. A partnership with tobacco companies was never on the table.

Bottom line: if you ever wonder why a Republican party today would undermine FCC and CIA authority, look at the FDR era that created the FCC (1934) and the CIA’s predecessor, the OSS (1942), to understand how and why they were designed to block and tackle foreign fascist military and domestic disinformation campaigns.


Update November 11, 2020:

First, a new story reports that during the Reagan administration big oil funded large fraudulent disinformation campaigns to poison American thinking about environmental health and safety.

As part of its services to the industry, FTI monitored environmental activists online, and in one instance an employee created a fake Facebook persona — an imaginary, middle-aged Texas woman with a dog — to help keep tabs on protesters. Former FTI employees say they studied other online influence campaigns and compiled strategies for affecting public discourse. They helped run a campaign that sought a securities rule change, described as protecting the interests of mom-and-pop investors, that aimed to protect oil and gas companies from shareholder pressure to address climate and other concerns…

Founded in 1982 in Annapolis, Md., as a firm that provided expert witnesses and presentations for litigation, FTI has grown into a multinational firm that employs almost 5,000 people in 28 countries. Its business spans a wide range of services, from business consulting to crisis communications.

Second, the FTC calls out Zoom for being a fraud, yet neither penalizes them nor compensates their victims.

Use of Zoom software…

‘increased users risk of remote video surveillance by strangers and remained on users’ computers even after they deleted the Zoom app, and would automatically reinstall the Zoom app—without any user action—in certain circumstances,’ the FTC said. The FTC alleged that Zoom’s deployment of the software without adequate notice or user consent violated US law banning unfair and deceptive business practices.

And they basically lied for years and years about security.

…Zoom claimed it offers end-to-end encryption in its June 2016 and July 2017 HIPAA compliance guides… also claimed it offered end-to-end encryption in a January 2019 white paper, in an April 2017 blog post, and in direct responses to inquiries from customers and potential customers… In fact, Zoom did not provide end-to-end encryption for any Zoom Meeting…

I’ve written before about Zoom’s egregious bad-faith business practices here and here.

‘Poem to Get Rid of Fear’

Fear Poem, or I Give You Back

by Joy Harjo, the current poet laureate of the U.S.

“Because of the fear monster infecting this country, I have been asked for this poem, this song. Feel free to use it, record it, and share. Please give credit. This poem came when I absolutely needed it. I was young and nearly destroyed by fear. I almost didn’t make it to twenty-three. This poem was given to me to share.” — Joy Harjo

I release you, my beautiful and terrible
fear. I release you. You were my beloved
and hated twin, but now, I don’t know you
as myself. I release you with all the
pain I would know at the death of
my children.
You are not my blood anymore.
I give you back to the soldiers
who burned down my home, beheaded my children,
raped and sodomized my brothers and sisters.
I give you back to those who stole the
food from our plates when we were starving.
I release you, fear, because you hold
these scenes in front of me and I was born
with eyes that can never close.
I release you
I release you
I release you
I release you
I am not afraid to be angry.
I am not afraid to rejoice.
I am not afraid to be black.
I am not afraid to be white.
I am not afraid to be hungry.
I am not afraid to be full.
I am not afraid to be hated.
I am not afraid to be loved.
to be loved, to be loved, fear.
Oh, you have choked me, but I gave you the leash.
You have gutted me but I gave you the knife.
You have devoured me, but I laid myself across the fire.
I take myself back, fear.
You are not my shadow any longer.
I won’t hold you in my hands.
You can’t live in my eyes, my ears, my voice
my belly, or in my heart my heart
my heart my heart
But come here, fear
I am alive and you are so afraid
of dying.

Massive Biometric Data Breach Traced to 2014 Yahoo

A privacy breach affecting hundreds of thousands of users in June 2014 happened several months after Yahoo had hired a CSO who publicly boasted he personally was the reason users could trust security of the service. He then quietly left in disgrace, failing to reveal this and other breaches, to become the CSO at Facebook and repeat the same story of breaches, again leaving in disgrace.

After his two failed attempts at CSO, generating a wake of unprecedented unreported global breaches and privacy disasters in just three years, he then took up an ill-conceived “academic” role at Stanford. What may be of importance now, based on the latest revelations, is relationships between academic staff and Facebook.

Ira Kemelmacher-Shlizerman is a “Science-Entrepreneur” on a Google moonshot project, after previously serving two years at Facebook. Her MegaFace “science” project to collect human faces for surveillance technology was done without user consent and is alleged to violate biometric privacy law.

March 2014 saw the following highly unusual PR campaign on finance.yahoo.com that seemed to frame security teams as competitors instead of collaborative industry peers:

Watch out, Google. The rumors are true. Yahoo has officially stepped up its security A-game. It’s called Alex Stamos.

Yahoo announced yesterday that it hired the world-renowned cybersecurity expert and vocal NSA critic to command its team of “Paranoids” in bulletproofing all of its platforms and products from threats that will surely come.

Bulletproofing. Who says that? Someone who doesn’t understand the role of CSO. “Vocal NSA critic” is a reference to when Stamos was parroting anti-government talking points (he stood in front of the head of NSA and bizarrely alleged the US should be treated as morally equivalent to Russia, China, Saudi Arabia…when discussing key management).

What these PR campaigns by Stamos failed to include was the fact that he had no prior experience as a CSO, let alone experience leading security operations for a public company, let alone management experience to handle a large complex organization.

His lack of experience very soon after manifested in some of the largest privacy breaches in history, as revealed by those who ended up involved in his catastrophic tenures.

For example, look at June 2014: just three months after those Yahoo “bulletproofing” boasts attempted to juice stocks, an unprecedented breach of privacy happened, violating American biometric data protection law:

In June 2014, seeking to advance the cause of computer vision, Yahoo unveiled what it called “the largest public multimedia collection that has ever been released,” featuring 100 million photos and videos. Yahoo got the images — all of which had Creative Commons or commercial use licenses — from Flickr, a subsidiary.

…researchers who accessed the database simply downloaded versions of the images and then redistributed them, including a team from the University of Washington. In 2015, two of the school’s computer science professors — Ira Kemelmacher-Shlizerman and Steve Seitz — and their graduate students used the Flickr data to create MegaFace.

That breach method should sound familiar. Anyone looking at the Cambridge Analytica incident in 2015 at Facebook would recognize it.

And as the facts would have it Stamos abruptly and quietly left Yahoo in June 2015 to join Facebook as their CSO. Then a month later a report surfaced that said American billionaires were actively using data mining in centralized data repositories to drive political coups.

Cambridge Analytica is connected to a British firm called SCL Group, which provides governments, political groups and companies around the world with services ranging from military disinformation campaigns to social media branding and voter targeting.

So far, SCL’s political work has been mostly in the developing world — where it has boasted of its ability to help foment coups.

By December 2015, despite these warnings, the Guardian broke a story on researchers taking data from Facebook without user consent.

Documents seen by the Guardian have uncovered longstanding ethical and privacy issues about the way academics hoovered up personal data by accessing a vast set of US Facebook profiles, in order to build sophisticated models of users’ personalities without their knowledge.

The FBI has released 2015 internal email threads from Facebook (PDF) where staff were discussing Cambridge Analytica.

Sept 30, 2015. 12:17PM. To set expectations we can’t certify/approve apps for compliance, and it’s very likely these companies are not in violation of any of our terms. …if we had more resources we could discuss a call with the companies to get a better understanding, but we should only explore that path if we do see red flags.

There are two major security leadership problems here.

One, it reflects a team that only will look for danger if they can get more resources.

If they could acknowledge what they were doing wasn’t working, getting more resources and doing more of the same thing probably wouldn’t change the situation. They would have to admit they don’t know what they are doing, which seems unlikely for a company with a CSO who has no real prior experience.

Two, given the first point, this also suggests the team only would look for evidence of smoke after they see evidence of fire, provided to them by the arsonists. That’s not the sort of thing that deserves more resources.

It reads basically like someone was running a self-funding plan to deliver an absolute least amount of security services possible. Such a mindset may be common for a CTO who is hoping to build a high-margin minimum-viable product (MVP). Yet it reads to me as entirely inverted from expected CSO ethical models.

While Facebook has repeatedly stated after the fact that Cambridge Analytica exploits were a “clear lapse” by their security team, we increasingly see evidence these security lapses may have also been present a year before under the same CSO at a different company.

After her project soaked up hundreds of thousands of people’s faces from the Yahoo services, the ones Stamos boasted he would protect, Kemelmacher-Shlizerman also joined Facebook in 2016.

In somewhat related news, Facebook still is on the hook for a $35 billion class-action lawsuit filed in 2015, the year Stamos joined as CSO and the year before Kemelmacher-Shlizerman joined.

The suit alleges that Illinois citizens didn’t consent to having their uploaded photos scanned with facial recognition and weren’t informed of how long the data would be saved when the mapping started in 2011. […] Filed in 2015, Facebook has done everything to try to block the class action case, from objecting to definitions of tons of words in the suit to lobbying against the underlying Biometric Information Privacy Act. The class action poses an even greater penalty than the record-breaking $5 billion settlement Facebook agreed to over violations of its FTC consent decree. Though that payment amounts to a fraction of the $55 billion in revenue Facebook earned last year, it’s also been saddled with tons of new data privacy and transparency requirements. The $35 billion threat coming into focus contributed to a 2.25% share price drop for Facebook today.

There’s a good chance this case is too political to survive a Supreme Court test.

The Facebook DC office is run by Joel Kaplan. He is the guy who infamously sat next to Kavanaugh while allegations of sexual assault were denied. Kaplan serves extreme-right nationalist publications like Breitbart and the Daily Caller by linking them to Facebook management. That also is why right now we’re about as likely to see Stamos held accountable for his disasters as anyone who failed upward into the White House.

But the real lesson here is that Americans are overly fixated on a singular individual as savior, despite society taking a huge risk by following their unexpected jumps. Whether it be Stamos, Snowden or Assange, there increasingly is a toxic exhaust from their meteoric failures, which a Canadian marketing journal recently described best:

We have become fascinated with strong individuals: Ninjas, rockstars and 30 under 30s. We hail unicorns and disruptors, and we mock those on the decline as dinosaurs or people who couldn’t see the writing on the wall. Celebrating individual achievements is fine, but when we forget about the importance of community, I believe we all suffer…

New York SHIELD Act (S.5575B/A.5635) Deadline Oct 23

The arrival of colonists from both the Netherlands and England in the mid-1600s marked a tragic end to Native Americans living in New York despite their SHIELDS.

In New York political circles it’s called the Stop Hacks and Improve Electronic Data Security (SHIELD) Act. Unless I’m reading that sentence wrong, it should be called the SHIELDS Act.

Leaving off the last S is kind of ironic, when you think about an act meant to prevent people from leaving off security.

In any case, S.5575B/A.5635 was meant to impose stronger regulations forcing notification of security breaches of any New York resident’s data. Passed July 25th this year (three days after the NY government announced a $19.2 million settlement with Equifax over their data breach), its breach notification provisions become effective one week from today (90 days after passage) on October 23, 2019.
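The effective-date arithmetic can be checked directly; here is a minimal sketch in Python, using the signing date and the two statutory intervals (90 days for breach notification, 240 days for data security programs):

```python
from datetime import date, timedelta

# SHIELD Act signed into law July 25, 2019
signed = date(2019, 7, 25)

# Breach notification provisions: effective 90 days after signing
breach_notification = signed + timedelta(days=90)
print(breach_notification)  # 2019-10-23

# Data security program requirements: effective 240 days after signing
data_security = signed + timedelta(days=240)
print(data_security)  # 2020-03-21
```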

Notable changes:

  • Broader definition of a breach: unauthorized access to private information
  • Broader definition of private information: includes bank account and payment data, biometric information and email addresses with any corresponding passwords or recovery flow (security questions and answers)
  • Broader definition of whose information is protected: any NY resident no matter where their data is stored (not just business operations in NY)
  • New state government notification requirements (deadline for data protection programs is March 21, 2020, but data breaches must be recorded starting October 23, 2019)
  • “Tailored” data security requirements based on “size of a business”
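To make the broadened scope concrete, here is a minimal sketch of the expanded definitions above. The field names and rules are hypothetical illustrations, not the statutory text:

```python
# Hypothetical field names; an illustration of the SHIELD Act's
# broadened "private information" definition, not the statute itself.
SHIELD_SENSITIVE_FIELDS = {
    "ssn", "drivers_license", "bank_account", "payment_card", "biometric",
}

def is_private_information(fields: set) -> bool:
    """Rough check whether a record's fields fall under the Act's
    broadened definition of private information."""
    # Financial and biometric identifiers now qualify directly.
    if fields & SHIELD_SENSITIVE_FIELDS:
        return True
    # Email addresses qualify when paired with a password or with
    # account-recovery data (security questions and answers).
    if "email" in fields and {"password", "security_qa"} & fields:
        return True
    return False

print(is_private_information({"email", "password"}))  # True
print(is_private_information({"email"}))              # False
print(is_private_information({"biometric"}))          # True
```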

The inclusion of biometric information in state data protection legislation is a huge deal in America. This recently came to light when people realized their rights were being egregiously violated by technology companies, given Illinois regulations that are already more than ten years old:

As residents of Illinois, they are protected by one of the strictest state privacy laws on the books: the Biometric Information Privacy Act, a 2008 measure that imposes financial penalties for using an Illinoisan’s fingerprints or face scans without consent. Those who used the [unprecedentedly huge facial-recognition database called MegaFace] — companies including Google, Amazon, Mitsubishi Electric, Tencent and SenseTime — appear to have been unaware of the law, and as a result may have huge financial liability, according to several lawyers and law professors familiar with the legislation.