USAF Plans Around PsyOps Leaflet Dispersal Shortfall

Somehow I had missed some leaflet planning considerations over the past few years.

…shortfall on leaflet dispersal capability will jeopardize Air Force Central Command information operations,” said Earl Johnson, B-52 PDU-5/B project manager. The “Buff” can carry 16 PDU-5s under the wings, making it able to distribute 900,000 leaflets in a single sortie.

That’s a lot of paper.
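For a sense of scale, the two quoted figures imply each canister holds a little over fifty thousand leaflets. Quick arithmetic (my math, not an official USAF figure):

```python
# Back-of-envelope check on the figures quoted above.
PDU5_PER_SORTIE = 16          # canisters under the wings
LEAFLETS_PER_SORTIE = 900_000

per_canister = LEAFLETS_PER_SORTIE / PDU5_PER_SORTIE
print(f"{per_canister:,.0f} leaflets per PDU-5")  # -> 56,250 leaflets per PDU-5
```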

Discussing leaflet drops with a pilot the other night set me straight on the latest methods, and he emphasized some recent capability for computerized micro-targeting of leaflet fall paths. It sounded too good to be true. I still imagine the stuff floating everywhere randomly, like snowflakes.
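My snowflake intuition can at least be put into a toy model: a leaflet falling at constant speed through layered winds drifts a very long way, so any “micro-targeting” has to nail the wind profile. A minimal sketch, with every number invented for illustration (real dispersal models account for leaflet aerodynamics, turbulence, and measured winds aloft):

```python
# Toy leaflet drift model: integrate horizontal wind drift during descent.
DESCENT_RATE = 1.5  # m/s, assumed steady fall speed for a paper leaflet

# (altitude_floor_m, wind_east_m_s, wind_north_m_s): a crude layered wind profile
WIND_LAYERS = [
    (3000, 12.0,  3.0),
    (1500,  8.0,  1.0),
    (   0,  4.0, -2.0),
]

def wind_at(alt_m):
    for floor, east, north in WIND_LAYERS:
        if alt_m >= floor:
            return east, north
    return WIND_LAYERS[-1][1:]

def ground_offset(release_alt_m, dt=1.0):
    """Predict horizontal offset (east, north) in meters at touchdown."""
    alt, east, north = release_alt_m, 0.0, 0.0
    while alt > 0:
        we, wn = wind_at(alt)
        east += we * dt
        north += wn * dt
        alt -= DESCENT_RATE * dt
    return east, north

print(ground_offset(4500))  # roughly (24000, 2000): ~24 km of downwind drift
```

Roughly 24 km of drift from a 4,500 m release suggests “micro-targeting” is really a wind forecasting problem.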

Then again, he might have been pulling my leg, given that we were talking about psyops. Or maybe the person who told him was pulling his.

Speaking of psyops, OpenAI is said to have leaked their monster into the world. We’re already being warned it could unleash targeted text at a scale even a B-52 couldn’t touch:

…extremist [use of OpenAI tools] to create ‘synthetic propaganda’ that would allow them to automatically generate long text promoting white supremacy or jihadist Islamis…

And perhaps you can see where things are headed right now in this information dissemination race?

U.S. seen as ‘exporter of white supremacist ideology,’ says counterterrorism official

Can B-52s leaflet America fast enough to convince domestic terrorists to stop generating their AI-based long texts promoting white supremacist ideology?

Is Stanford Internet Observatory (SIO) a Front Organization for Facebook?

A “Potemkin Village” is made from fake storefronts built to fraudulently impress a visiting czar and dignitaries. The “front organization” is torn down once its specific message/purpose ends.

Image Source: Weburbanist’s ‘Façades’ series by Zacharie Gaudrillot-Roy

Step one (PDF): Facebook sets up special pay-to-play access (competitive advantage) to user data and leaks this privileged (back) door to Russia.

(October 8, 2014 email in which Facebook engineer Alberto Tretti emails Archibong and Papamiltiadis notifying them that entities with Russian IP addresses have been using the Pinterest API access token to pull over 3 billion data points per day through the Ordered Friends API, a private API offered by Facebook to certain companies who made extravagant ads purchases to give them a competitive advantage against all other companies. Tretti sends the email because he is clearly concerned that Russian entities have somehow obtained Pinterest’s access token to obtain immense amounts of consumer data. Merely an hour later Tretti, after meeting with Facebook’s top security personnel, retracts his statement without explanation, calling it only a “series of unfortunate coincidences” without further explanation. It is highly unlikely that in only an hour Facebook engineers were able to determine definitively that Russia had not engaged in foul play, particularly in light of Tretti’s clear statement that 3 billion API calls were made per day from Pinterest and that most of these calls were made from Russian IP addresses when Pinterest does not maintain servers or offices in Russia)
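Before moving to step two, note that the anomaly described in that email is exactly the kind a trivial control catches. A minimal sketch of the check that should have fired and stayed fired, assuming API calls are logged with a source-country lookup (all field names and data here are hypothetical):

```python
# Hypothetical API-abuse check: flag a partner token whose call volume comes
# overwhelmingly from countries where that partner has no known presence.
from collections import Counter

PARTNER_COUNTRIES = {"pinterest": {"US"}}  # hypothetical reference data

def flag_token_abuse(owner, call_log, threshold=0.5):
    """call_log: iterable of (token_owner, source_country), one per API call."""
    origins = Counter(c for o, c in call_log if o == owner)
    total = sum(origins.values())
    expected = PARTNER_COUNTRIES.get(owner, set())
    return [
        (country, n) for country, n in origins.items()
        if total and country not in expected and n / total >= threshold
    ]

# Toy log skewed the way the email describes: mostly non-US origins.
calls = [("pinterest", "RU")] * 8 + [("pinterest", "US")] * 2
print(flag_token_abuse("pinterest", calls))  # -> [('RU', 8)]
```

An alert like that does not get explained away in an hour, which is the point of the filing.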

Step two: Facebook CEO announces his company doesn’t care if information is inauthentic or even disinformation.

Most of the attention on Facebook and disinformation in the past week or so has focused on the platform’s decision not to fact-check political advertising, along with the choice of right-wing site Breitbart News as one of the “trusted sources” for Facebook’s News tab. But these two developments are just part of the much larger story about Facebook’s role in distributing disinformation of all kinds, an issue that is becoming more crucial as we get closer to the 2020 presidential election. And according to one recent study, the problem is getting worse instead of better, especially when it comes to news stories about issues related to the election. Avaaz, a site that specializes in raising public awareness about global public-policy issues, says its research shows fake news stories got 86 million views in the past three months, more than three times as many as during the previous three-month period.

Step three: Facebook announces it has used an academic institution led by former staff to measure authenticity and coordination of actions (not measure disinformation).

Working with the Stanford Internet Observatory (SIO) and the Daily Beast, Facebook determined that the shuttered accounts were coordinating to advance pro-Russian agendas through the use of fabricated profiles and accounts of real people from the countries where they operated, including local content providers. The sites were removed not because of the content itself, apparently, but because the accounts promoting the content were engaged in inauthentic and coordinated actions.

In other words, you can tell a harmful lie. You just can’t start a union, even to tell a truth, because unions by definition would be inauthentic (representing others) and coordinated in their actions.
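To make that distinction concrete, here is what “coordinated inauthentic behavior” detection reduces to in its crudest form: cluster accounts posting near-identical text in a narrow window, never asking whether the text is true. A toy illustration of the general approach, not Facebook’s or SIO’s actual method:

```python
# Toy coordination detector: group accounts posting identical text within a
# short window. Note it never evaluates truth -- that's the policy point above.
from collections import defaultdict

WINDOW_SECONDS = 300  # assumed coordination window

def coordinated_clusters(posts, min_accounts=3):
    """posts: list of (account, text, unix_time); returns flagged clusters."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((ts, account))
    flagged = []
    for text, entries in by_text.items():
        entries.sort()
        accounts = {a for _, a in entries}
        if len(accounts) >= min_accounts and entries[-1][0] - entries[0][0] <= WINDOW_SECONDS:
            flagged.append((text, sorted(accounts)))
    return flagged

posts = [
    ("acct1", "Vote for X!", 1000),
    ("acct2", "Vote for X!", 1060),
    ("acct3", "Vote for X!", 1110),
    ("acct9", "A harmful lie", 1000),  # false but solo, so never flagged
]
print(coordinated_clusters(posts))  # -> [('Vote for X!', ['acct1', 'acct2', 'acct3'])]
```

Notice a union messaging campaign trips this detector while a lone liar sails through, which is exactly the objection.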

It’s ironic as well, since this new SIO clearly was created by Facebook to engage in inauthentic and coordinated actions. Do as they say, not as they do.

The Potemkin Village effect here is thus former staff of Facebook creating an academic front to look like they aren’t working for Facebook, while still working with and for Facebook… on a variation of the very thing that Facebook has said it would not be working on.

For example, hypothetically speaking:

If Facebook were a company in 1915, would they have said they don’t care about inauthentic information in “Birth of a Nation” encouraging a restart of the KKK?

Even to this day Americans are confused about whether Woodrow Wilson’s White House coordinated the restart of the KKK, and they debate that instead of the obvious failure to block a film with content intended to get black people killed (e.g. the huge rise in lynchings, the 1919 Red Summer, the 1921 Tulsa massacre, etc.).

Instead, based on this new SIO model, it seems Facebook of 1915 would partner with a University to announce they will target and block films of pro-KKK rallies on the basis of white sheets and burning crosses being inauthentic coordinated action.

It reads to me like a very strange use of APIs as privacy backdoors, as well as use of “academic” organizations as legal backdoors; both amount to false self-regulation, an attempt to side-step the obvious external pressure to regulate harms from speech.

Facebook in 1915 perhaps would have said the KKK are fine to call for genocide and the death of non-whites, as long as those known to be pushing such toxic and inauthentic statements don’t put a hood on to conceal their faces while they do it.

It’s easy to see some irony in how Facebook takes an inauthentic position, with its own staff strategically installed into an academic institution like Stanford, while telling everyone else they have to be authentic in their actions.

Also perhaps this is a good time to remember how a Stanford professor took large payments from tobacco companies to say cigarettes weren’t causing cancer.

[Board-certified otolaryngologist Bill Fees] said he was paid $100,000 to testify in a single case.

Updated November 12 to add latest conclusions of the SIO about Facebook data provided to them.

Considered as a whole, the data provided by Facebook — along with the larger online network of websites and accounts that these Pages are connected to — reveal a large, multifaceted operation set up with the aim of artificially boosting narratives favorable to the Russian state and disparaging Russia’s rivals. Over a period when Russia was engaged in a wide range of geopolitical and cultural conflicts, including Ukraine, MH17, Syria, the Skripal Affair, the Olympics ban, and NATO expansion, the GRU turned to active measures to try to make the narrative playing field more favorable. These active measures included social-media tactics that were repetitively deployed but seldom successful when executed by the GRU. When the tactics were successful, it was typically because they exploited mainstream media outlets; leveraged purportedly independent alternative media that acts, at best, as an uncritical recipient of contributed pieces; and used fake authors and fake grassroots amplifiers to articulate and distribute the state’s point of view. Given that many of these tactics are analogs of those used in Cold-War influence operations, it seems certain that they will continue to be refined and updated for the internet era, and are likely to be used to greater effect.

One thing you haven’t seen, and probably never will see, is the SIO saying Facebook is a threat, or that privately-held publishing/advertising companies are a danger to society (e.g. the way tobacco companies or oil companies are a danger).

Electronic Warfare Planning and Management Systems

In 2014 I gave a series of talks looking at the use of big data to predict the effects/spread of disease, chemicals, and bomb blast radius (especially in urban areas), and how integrity controls greatly affected the future of our security industry.

This was not something I pioneered, by any stretch; I was simply looking into the systems insurance companies were running in the cloud. These companies were exhausting cloud capacity at the time to run all kinds of harm and danger predictions.

Granted, I might have been the first to suggest a map of zombie movement (e.g. Russian infantry) would be interesting to plot, but the list of harm predictions goes on infinitely, and everyone in the business of response wants a tool.

The 2015 electronic warfare (EW) activity in Ukraine and more recent experiences in Syria have prompted the US military to seek solutions in that area as well: given a set of features, what could jamming look like, and how should troops route around it, for example.

Source: “Electronic Warfare – The Forgotten Discipline… Refocus on this Traditional Warfare Area Key for Modern Conflict?” by Commander Malte von Spreckelsen, DEU N, NATO Joint Electronic Warfare Core Staff
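That question above (“given a set of features, what could jamming look like and how should troops route around it”) splits into a prediction layer and a routing layer, and the routing half is textbook shortest-path over a risk-weighted grid. A minimal sketch with invented jamming probabilities (a real system would feed the grid from its prediction model):

```python
# Toy route planner: Dijkstra across a grid where each cell carries a
# predicted jamming risk (0.0 clear, 1.0 certainly jammed). Values invented.
import heapq

JAM_RISK = [
    [0.1, 0.6, 0.9, 0.1],
    [0.1, 0.8, 0.9, 0.1],
    [0.1, 0.2, 0.3, 0.1],
]

def route(grid, start, goal, risk_weight=10.0):
    """Cheapest path where cost = distance + weighted jamming risk."""
    rows, cols = len(grid), len(grid[0])
    best, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        cost, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                ncost = cost + 1.0 + risk_weight * grid[nr][nc]
                if ncost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ncost
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (ncost, (nr, nc)))
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]

print(route(JAM_RISK, (0, 0), (0, 3)))  # detours along the low-risk bottom row
```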

It’s a hot topic these days:

The lack of understanding of the implications of EW can have significant mission impact – even in the simplest possible scenario. For example, having an adversary monitor one’s communications or eliminate one’s ability to communicate or navigate can be catastrophic. Likewise, having an adversary know the location of friendly forces based on their electronic transmissions is highly undesirable and can put those forces at a substantial disadvantage.

The US is calling its program the Electronic Warfare Planning and Management Tool (EWPMT), and contractors are already claiming progress on big data analysis development:

Raytheon began work on the final batch, known as a capability drop, in September. This group will use artificial intelligence and machine learning as well as a more open architecture to allow systems to ingest swaths of sensor data and, in turn, improve situational awareness. Such automation is expected to significantly ease the job of planners.

Niraj Srivastava, product line manager for multidomain battle management at Raytheon, told reporters Oct. 4 that thus far the company has delivered several new capabilities, including the ability for managers to see real-time spectrum interference as a way to help determine what to jam as well as the ability to automate some tasks.

It starts out looking a lot like what we used for commercial wireless site assessments starting around 2005: grab all the signals by deploying sensors (static and mobile), generate a heatmap, and dump it all into a large data store.

Then it leverages commercial agile development, scalable cloud infrastructure, and machine learning from 2010 onward to generate predictive maps, with dials to modify variables like destroying/jamming a signal source.
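Caricatured in a few lines, that pipeline is: interpolate point readings into a coverage map, then turn a dial (say, remove an emitter) and re-render. A toy sketch using inverse-distance weighting, with all positions and readings invented; real tools use propagation models and far more data:

```python
# Toy spectrum heatmap: inverse-distance-weighted (IDW) interpolation of
# sensor readings, plus a "dial" that re-renders with one emitter removed.

SENSORS = [  # (x, y, received_power_dbm): hypothetical survey readings
    (0.0, 0.0, -40.0),
    (10.0, 0.0, -70.0),
    (0.0, 10.0, -65.0),
    (10.0, 10.0, -80.0),
]

def heatmap(sensors, width=5, height=5, cell=2.5):
    """Interpolate readings onto a width x height grid (IDW, power = 2)."""
    grid = []
    for gy in range(height):
        row = []
        for gx in range(width):
            x, y = gx * cell, gy * cell
            num = den = 0.0
            for sx, sy, p in sensors:
                d2 = (x - sx) ** 2 + (y - sy) ** 2
                if d2 == 0:          # grid point sits on a sensor: exact value
                    num, den = p, 1.0
                    break
                num += p / d2
                den += 1.0 / d2
            row.append(round(num / den, 1))
        grid.append(row)
    return grid

baseline = heatmap(SENSORS)
what_if = heatmap([s for s in SENSORS if s[2] != -40.0])  # strongest source gone
print(baseline[0][0], what_if[0][0])  # -40.0 -70.0
```

The machine learning in the contractor pitch replaces the interpolation and the manual dial with learned predictions, but the data flow is the same: sensors in, map out.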

Open architectures for big data dropping in incremental releases. It’s amazing, and a little disappointing to be honest, how 2019 is turning out to be exactly what we were talking about in 2014.

$3M HIPAA Settlement for Hospital Failing Repeatedly to Encrypt Patient Data Over 10 Years

According to HHS, this hospital reported a breach in 2010 and was given a warning along with technical assistance, then was breached again in 2013 and 2017.

URMC filed breach reports with OCR in 2013 and 2017 following its discovery that protected health information (PHI) had been impermissibly disclosed through the loss of an unencrypted flash drive and theft of an unencrypted laptop, respectively. OCR’s investigation revealed that URMC failed to conduct an enterprise-wide risk analysis; implement security measures sufficient to reduce risks and vulnerabilities to a reasonable and appropriate level; utilize device and media controls; and employ a mechanism to encrypt and decrypt electronic protected health information (ePHI) when it was reasonable and appropriate to do so. Of note, in 2010, OCR investigated URMC concerning a similar breach involving a lost unencrypted flash drive and provided technical assistance to URMC. Despite the previous OCR investigation, and URMC’s own identification of a lack of encryption as a high risk to ePHI, URMC permitted the continued use of unencrypted mobile devices.

Encryption is not that hard, especially for mobile devices; flash drives and laptops are trivial to encrypt and to manage keys for. It’s not a technical problem, it’s a management/leadership one, which is why these regulatory fines probably should be even larger and come straight out of executive pockets.
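To underline how low the technical bar is, here is file encryption for removable media in its entirety, sketched with the `cryptography` package’s Fernet recipe (the file name is made up; key storage and escrow, deliberately omitted here, is the actual management work):

```python
# Minimal authenticated file encryption.
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the hard part is managing THIS across a fleet
cipher = Fernet(key)

with open("patient_records.csv", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

with open("patient_records.csv.enc", "wb") as f:
    f.write(ciphertext)

# Later, with the same key:
# plaintext = Fernet(key).decrypt(ciphertext)
```

A dozen lines for the technology; the decade spent not deploying it was a leadership choice.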