Category Archives: Security

Future Illiteracy is the Inability to Unlearn

Alvin Toffler perhaps put it best in his famous 1970 book, Future Shock.

Source: Toffler, A. (1990). Future Shock. Random House, p. 414.

Toffler said the following, just before admitting the idea came from psychologist Herbert Gerjuoy of the Human Resources Research Organization (Department of the U.S. Army).

Students must learn how to discard old ideas, how and when to replace them. They must, in short, learn how to learn. Early computers consisted of a “memory” or bank of data plus a “program” or set of instructions that told the machine how to manipulate the data. Large late-generation computer systems not only store greater masses of data, but multiple programs, so that the operator can apply a variety of programs to the same data base. Such systems also require a “master program” that, in effect, tells the machine which program to apply and when. The multiplication of programs and addition of a master program vastly increased the power of the computer. A similar strategy can be used to enhance human adaptability. By instructing students how to learn, unlearn and relearn, a powerful new dimension can be added to education.
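To make Toffler’s computer analogy concrete, here is a minimal sketch in Python, with purely hypothetical names, of a shared data bank, multiple programs, and a “master program” that tells the machine which program to apply and when:

data_bank = [3, 1, 4, 1, 5, 9, 2, 6]

def summarize(data):
    # One "program": reduce the shared data bank to an average.
    return sum(data) / len(data)

def rank(data):
    # Another "program": order the same data bank.
    return sorted(data, reverse=True)

def master_program(task, data):
    # The "master program": selects which program to apply and when.
    programs = {"summarize": summarize, "rank": rank}
    return programs[task](data)

print(master_program("summarize", data_bank))  # 3.875
print(master_program("rank", data_bank))       # [9, 6, 5, 4, 3, 2, 1, 1]

The power comes not from any single program but from the layer that chooses among them, which is Toffler’s point about learning how to learn.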

Learn how to discard old ideas, and how and when to replace them, to be literate.

Learn, unlearn and relearn to be literate.

“Cornerstones of Attachment Research” is a new book that phrases it like this:

…early findings and ideas; these enter into circulation, ricochet and rebound among domains of practice, and get repeated and repeated. Later developments, even important ones, become difficult to access and incorporate…

John Maynard Keynes famously wrote something similar in the preface to his 1936 “The General Theory of Employment, Interest and Money” (paraphrased here):

The difficulty lies not so much in developing new ideas as in escaping from old ones.

And from a military perspective, I think Daniel Hulter’s USAF essay is my favorite take on the problem yet:

I’ve seen too many times when either those pilots mistook their role for simply keeper of the machine–they just let it do what it was gonna do, not seeing or caring where we fell outside the margins… or they saw that our situation wasn’t protected in policy and felt helpless themselves. They weren’t piloting the machine. They were just operators.

To be literate enough to become a pilot you have to extend beyond “keeper of the machine” into adaptation, an example I often used in my big data security presentations framed as flying versus fighting.

American pilots over the Pacific in WWII knew this well; they would come from below and behind to pick off Japanese pilots stuck in rigid “keeper” mode formations.

Meta Slavery: It’s Time to Stop Blaming Surveillance

If someone continuously shames surveillance (as if the pursuit of knowledge is inherently a bad thing), they’re headed down the dangerous and dark path of criminalizing knowledge.

Let’s be honest here. Facebook’s business model is not surveillance; it is slavery (an extreme form of debt capitalism). Here’s how a paper from 2016 laid out the problem:

…virtual world technology not only allows real-time synchronous communication and economic activity between users (who are commonly represented usually by graphical avatars), but also allows for the accurate logging of even the smallest economic activities that occur…

Renaming themselves Meta is part of their quest to create a slavery world, where people who enter hand over their entire existence and then are prevented from owning their own bodies (whether at work or leisure, in public or private, as I warned in 2016 and illustrated again in 2019).

It is wrong to equate slavery with surveillance. That is like equating prisons with homes. Not all surveillance is bad. Not all homes are bad.

NO.

Don’t continue this false equivalence. Surveillance can be misused. Surveillance can be wrong. But it can also be very right and necessary. In fact, surveillance may be necessary to end slavery (knowledge of crimes against humanity).

Do not criminalize knowledge, and do not shame all knowledge gathering just because in some cases it is wrong. That will backfire in a VERY bad way, ushering in even more harms from ignorance.

Parents surveil their children for safety, within an ancient consent model everyone is familiar with.

Doctors surveil their patients for rational reasons, as surely everyone would agree. Healthcare requires such a granular user surveillance model that breakthroughs in safety depend on a battery of monitoring devices. Science depends on surveillance so granular it’s literally microscopic.

Thus we must stop calling surveillance evil, given that it is the slavery built on top of surveillance that is evil. “Surveillance capitalism” sounds about as bad as “intellectual property.” Are we really prepared to say people shouldn’t ever be allowed to sell what they know?

And now this:

Facebook is an exemplar of surveillance capitalism, harvesting as much data as possible, pursuing scale at all costs, and coercing users toward clicking on ads and spending more time on the platform. It has no other appreciable values, certainly no moral ones, no matter what narrative Zuckerberg might retcon onto the company’s history. (The original sin of TheFacebook, of course, was Mark Zuckerberg’s libidinous mission to rank Harvard’s coeds by relative hotness.)

Everything distasteful about Facebook, including its unmanageable size, flows from its business model, which is infinitely scalable. Yes, we should break Facebook up, but we also must break its business model of ever more granular user surveillance.

Nobody should try to whitewash slave plantations, that is, “granular user surveillance platforms,” since the actual business model is human trafficking. And you can’t just walk away from that fact, as Steven Levy sagely warns:

…hold on, Mark. The Company Formerly Known As Facebook, which I will henceforth refer to as TCFKAF, can’t really move forward on that course until it repairs the vehicle that got it where it is. No matter what Facebook is called now, its crisis is ongoing.

Steven really nails it here:

A metaverse migration raises a ton of thorny issues that Zuckerberg’s keynote skated over, or just didn’t mention. If we have such a big disinformation problem now, what would it be like if everything around us—from clothes to real estate to, well, ourselves—was made of information? How can you justify building a whole new economy based on buying virtual products when so much of the world’s population can’t afford basic real-world products?

TCFKAF has likely calculated that creating a world where their crimes against humanity go without any prosecution will have higher margins than the real world, where they have to buy up lawyers and politicians to monopolize the courts and deny reality… thus the renaming to Meta.

…a “meta” in gaming terminology is a generally agreed upon strategy by the community. Said strategy is considered to be the most optimal way to win / has the best performance at a specific task. Some people have defined meta as an acronym meaning “most effective tactics available”.

Sounds like someone declaring themselves the “meta” as an act of obfuscation, to avoid admitting that something they’re doing is objectionable. Like when American slaveholders called themselves “planters” to obfuscate their business model of raping women.

Branding yourself a community leader when the community hates you isn’t supposed to work… Meta (μετά) in Greek means “above” (or beyond, after), such that TCFKAF is branding itself above the law, or beyond the law’s reach.

In other words, it will be surveillance of TCFKAF that brings the kind of exposure and interventions that could help end their slavery practices. Leaking internal documents scares them because it is a form of surveillance of their crimes against humanity. I will bet anyone that coming Facebook announcements about “improving privacy” will be little more than thinly veiled attempts to destroy evidence of their crimes, only to continue them more secretly.

TCFKAF must have realized they were about to come under surveillance and lose the game they thought they couldn’t lose; increased granular monitoring of their behavior started to generate public outrage, so they destroyed evidence and renamed themselves to God (as if that were a clever way to avoid punishment for immorality).

This was ironically raised in a recent heartfelt story by someone who decided not to be a spy:

…I had come round to the idea of watching people for a living, swept up in my escape fantasy I hadn’t given much thought to how it would feel to be watched myself.

So can we stop talking about surveillance already and instead just admit TCFKAF’s business model is slavery?

Meta slavery.

Source: Meta plantation

The Security Paradox: Higher Investment Leads to Less Safety

A new history book called “Dangerous Gifts” dives into the complexity of Middle East stability and comes up with this enticing premise:

…embroiled in a paradox—an ever-increasing demand for security despite the increasing supply…

Except, this isn’t a paradox.

Unregulated increases in security have an inversion effect, which is exactly why Facebook, boasting about its massive spend, has also become the worst platform.

Compare Facebook to Equifax, for example. The latter tightly constrained and managed its spend, with efficiencies that stabilized it and made it a leader in safety. The difference is the ethics of supply.

This is how the “imperial interests” mentioned in the new history book make its paradox thesis… not a paradox.

Here’s the full thesis of “Dangerous Gifts”, where you can see the crucial link to “interests” guiding the supply.

From Napoleon Bonaparte’s invasion of Egypt in 1798 to the foreign interventions in the ongoing civil wars in Syria, Yemen, and Libya today, global empires or the so-called Great Powers have long assumed the responsibility of bringing security to the Middle East. The past two centuries have witnessed their numerous military occupations to ‘liberate’, ‘secure’, and ‘educate’ local populations. Consulting fresh primary sources collected from some thirty archives in the Middle East, Russia, the United States, and Western Europe, Dangerous Gifts revisits the late eighteenth- and nineteenth-century origins of these imperial security practices. It questions how it all began. Why did Great Power interventions in the Ottoman Levant tend to result in further turmoil and civil wars? Why has the region been embroiled in a paradox—an ever-increasing demand for security despite the increasing supply—ever since? It embeds this highly pertinent genealogical history into an innovative and captivating narrative around the Eastern Question, freeing the latter from the monopoly of Great Power politics, and also foregrounding the experience and agency of the Levantine actors: the gradual yet still forceful opening up of the latter’s economies to global free trade, the asymmetrical implementation of international law from their perspective, and the secondary importance attached to their threat perceptions in a world where political and economic decisions were ultimately made through the filter of global imperial interests.

Facebook’s security officer spent more on security because he wanted less safety, which would further balloon his own interests in self-enrichment (and that of his friends). It’s no coincidence he purchased a $3 million home in the hills above Silicon Valley. Would anyone think of him as a descendant of Napoleon?

Before he was hired to drive Facebook’s infamous collapse of safety, he was at Yahoo for only about a year, where he secretly pulled $2 million out of their budget to hand out to his friends and followers (under the line item “bug bounty”), which did exactly nothing for platform safety. After he left, Yahoo had to report the biggest safety breaches in its history.

It’s perhaps counterintuitive, yet if you place an ethics filter over security spend you can see how massive investments proposed by immoral security leaders are sometimes predictably going to reduce safety, giving them an excuse to demand more.

More investment therefore can lead to more safety, yet only if that investment has proper governance: adherence to principles of ethics such as inherited rights and external accountability for harms.

I’m reminded of the days before “mutually assured destruction” had its total meltdown in the Cuban Missile Crisis.

The unregulated hawks of America were trying to cook up another fictional “knockout punch” with a weapon (Project Pluto) meant to demoralize the Soviets by being so egregiously awful.

This is a good reminder that the Japanese considered the nuclear bombs nothing more than a drop of rain in a hurricane that had lasted many months. In fact, the actual reason for Japan’s rapid capitulation at the end of WWII was fear of Soviet troops walking into their territory and seizing control.

In addition, the cost in American lives of the Manhattan Project is estimated to have been higher than what the Japanese suffered from it. And obviously it led to even further harms elsewhere, as well as destabilization of the world afterwards…

An expensive nuclear-ramjet-powered missile, nonetheless following the fictional narrative of nuclear bombing, was conceived to fly around the world four times, dumping toxic radiation as it went while lobbing hydrogen bombs with questionable accuracy.

Insanity? An excellent reminder of how “security investment” can be totally out of control without some basic morality as its guide.

Source: Herbert F. York, “The Debate Over the Hydrogen Bomb,” Scientific American (Oct 1975), p. 110.

New Ways to Predict the Future With Machines Reading the Present

Usually I like to talk about making predictions about the future based on a reading of history.

However, I found two recent articles that forced me to think about how more current publications help set a future course for science. Think Google, but not so evil, because it has nothing to do with advertising.

First is a story about a giant new index of 107 million papers, announced in a way that cleverly navigates around present copyright laws.

Some researchers who have had early access to the index say it’s a major development in helping them to search the literature with software — a procedure known as text mining. Gitanjali Yadav, a computational biologist at the University of Cambridge, UK, who studies volatile organic compounds emitted by plants, says she aims to comb through Malamud’s index to produce analyses of the plant chemicals described in the world’s research papers. “There is no way for me — or anyone else — to experimentally analyse or measure the chemical fingerprint of each and every plant species on Earth. Much of the information we seek already exists, in published literature,” she says. But researchers are restricted by lack of access to many papers, Yadav adds. Malamud’s ‘General Index’, as he calls it, aims to address the problems faced by researchers such as Yadav.
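To illustrate the kind of text mining Yadav describes, here is a minimal Python sketch that scans an index for plant-chemical terms. The file name, the two-column (ngram, paper_id) row format, and the chemical terms are assumptions for illustration, not the General Index’s actual schema:

import csv
from collections import defaultdict

CHEMICAL_TERMS = {"linalool", "limonene", "pinene"}  # hypothetical query terms

papers_by_chemical = defaultdict(set)

# Assumed format: each row is an extracted ngram plus the paper it came from.
with open("general_index_sample.csv", newline="") as f:
    for ngram, paper_id in csv.reader(f):
        for term in CHEMICAL_TERMS:
            if term in ngram.lower():
                papers_by_chemical[term].add(paper_id)

for term, papers in sorted(papers_by_chemical.items()):
    print(f"{term}: appears in {len(papers)} papers")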

Second is a paper on the prediction of research trends using computational analysis of available papers.

Here, we demonstrate the development of a semantic network for quantum physics, denoted SEMNET, using 750,000 scientific papers and knowledge from books and Wikipedia. We use it in conjunction with an artificial neural network for predicting future research trends. Individual scientists can use SEMNET for suggesting and inspiring personalized, out-of-the-box ideas. Computer-inspired scientific ideas will play a significant role in accelerating scientific progress, and we hope that our work directly contributes to that important goal.

Source: PNAS, January 28, 2020, vol. 117, no. 4. “The edges are formed when two concepts coappear in a title or abstract of any of the 750,000 papers”
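To illustrate the edge rule in that caption, here is a minimal Python sketch that connects two concepts whenever they co-appear in the same abstract. The concept list and abstracts are toy stand-ins, not SEMNET’s actual vocabulary or corpus:

from itertools import combinations
from collections import Counter

concepts = ["entanglement", "qubit", "teleportation", "decoherence"]

abstracts = [
    "Entanglement between qubit pairs enables teleportation.",
    "Decoherence limits qubit lifetimes.",
    "Teleportation protocols rely on entanglement.",
]

edge_weights = Counter()
for text in abstracts:
    found = [c for c in concepts if c in text.lower()]
    for pair in combinations(sorted(found), 2):
        edge_weights[pair] += 1  # edge weight = number of co-appearances

for (a, b), weight in sorted(edge_weights.items()):
    print(f"{a} -- {b}: {weight}")

The paper then pairs the evolution of such a network with an artificial neural network to predict which links future research is likely to form.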

It’s always tempting to invoke Douglas Adams’ famous “42” story when reading these types of articles.

The methods look more mathematical, and rushed to conclusions, compared with what a seasoned historian might do to validate trends or meaning.