Could Interoperable Decentralization of Data Help Its Integrity Problems?

Ivermectin research is plagued with data integrity failures, raising an important question for security and privacy professionals: what better data control options are available?

The latest news seems right on track, demanding interoperability from technology that facilitates individually controlled patient data stores:

…calling for scientists to adopt a new standard for meta-analyses, where individual patient data, not just a summary of that data, is provided by scientists who conducted the original trials and subsequently collected for analysis…

In other words, using the Solid protocol would enable patients to participate in a consensual study by opening access to their data for research, while still maintaining the highest possible integrity.
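As a minimal sketch of the idea (hypothetical names only, not the actual Solid API): each patient's data store carries its own access control list, and a pooled analysis can only read records whose owners have explicitly granted access.

```python
# Hypothetical sketch of pod-style consent checks (NOT the real Solid API).
# Each patient's data store carries its own access control list (ACL),
# and an analysis only sees records whose owners granted Read access.

class Pod:
    def __init__(self, owner, records):
        self.owner = owner
        self.records = records   # individual patient data, not a summary
        self.acl = set()         # agents the owner granted Read access

    def grant_read(self, agent):
        self.acl.add(agent)

    def read(self, agent):
        if agent not in self.acl:
            raise PermissionError(f"{agent} has no Read grant from {self.owner}")
        return self.records

def pooled_analysis(pods, agent):
    # Only consenting patients contribute; everyone else is simply absent.
    data = []
    for pod in pods:
        try:
            data.extend(pod.read(agent))
        except PermissionError:
            continue
    return data

alice = Pod("alice", [120, 130])
bob = Pod("bob", [140])
alice.grant_read("trial-x")   # Alice consents to the study, Bob does not

print(pooled_analysis([alice, bob], "trial-x"))  # [120, 130]
```

The key design point is that the grant lives with the patient's data, not with the researcher, so consent can be checked, and revoked, at the source.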

Saying “accuracy is still bad” is the defining security story of the 2010s and now the 2020s as well… seriously holding back the usefulness of technology by undermining knowledge.

Integrity control is lacking innovation and needs a completely new approach; it’s far behind where we are in confidentiality and availability control engineering.

Robo-Debt and the Algorithm-Industrial Complex

This paragraph caught my attention, as I’ve been trying to shift the discussion from surveillance to debt capitalism.

What does this algorithm-industrial complex look like and who is involved?

Perhaps our first glimpse of the catastrophic impact of algorithms on civil society was robodebt, deemed unlawful by the Federal Court in a blistering assessment describing it as a “massive failure in public administration” of Australia’s social security scheme.

Very well said.
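For concreteness, robodebt’s central defect can be sketched in a few lines: averaging annual income evenly across fortnights erases exactly the variation that fortnightly welfare eligibility depends on. The thresholds and rates below are hypothetical, purely to illustrate the mechanism.

```python
# Simplified illustration of robodebt's core flaw: smearing annual income
# evenly across 26 fortnights manufactures "debts" for anyone whose income
# was uneven (seasonal workers, casual workers, the recently unemployed).
# All rates and thresholds here are hypothetical.

FORTNIGHTS = 26
INCOME_FREE_AREA = 300   # hypothetical income allowed per fortnight
TAPER = 0.5              # hypothetical reduction rate above that threshold

def entitlement(fortnightly_income, base_rate=600):
    over = max(0, fortnightly_income - INCOME_FREE_AREA)
    return max(0, base_rate - TAPER * over)

# A seasonal worker: 6 fortnights of high pay, 20 with no income at all.
actual = [2600] * 6 + [0] * 20

# What the person was correctly paid, fortnight by fortnight.
true_paid = sum(entitlement(f) for f in actual)

# What the algorithm assumes after averaging annual income across the year.
averaged = sum(actual) / FORTNIGHTS
assumed_paid = entitlement(averaged) * FORTNIGHTS

print(true_paid, assumed_paid)   # the gap becomes a phantom "debt"
```

The worker here was correctly paid for the 20 empty fortnights, yet the averaged model concludes they were overpaid, a debt that never existed, issued automatically at scale.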

The algorithm-industrial complex is characterised by a power and skills distortion: public sectors gutted of skills, and the influence of and outrageous expenditure on outsourcing, tech, and consultants.

…what defines the algorithm-industrial complex is the emergence of policy which can only be executed via algorithms.

That’s not quite right, since humans have used algorithms for thousands of years, but I get the point.

One of Khwãrezm’s most famous residents was Muhammad ibn Mūsa al-Khwarizmī, an influential 9th century scholar, astronomer, geographer, and mathematician known especially for his contributions to the study of algebra. Indeed, the latinization of his name, which meant ‘the native of Khwãrezm’ in Persian, gave English the word algorithm. He wrote a book in Arabic about Hindu-Arabic numerals; the Latin translation of the book title was Algoritmi de numero Indorum (in English Al-Khwarizmi on the Hindu Art of Reckoning).

Although the word algorithm can be traced to this man’s name in medieval Baghdad, it’s really the even more ancient Babylonians who started using algorithms 4,000 years ago, as computer scientist and mathematician Donald E. Knuth wrote in Ancient Babylonian Algorithms, Communications of the ACM 15, no. 7 (July 1972) 671-77:

One of the ways to help make computer science respectable is to show that it is deeply rooted in history, not just a short-lived phenomenon. Therefore it is natural to turn to the earliest surviving documents which deal with computation, and to study how people approached the subject nearly 4000 years ago. […] The calculations described in Babylonian tablets are not merely the solutions to specific individual problems; they are actually general procedures for solving a whole class of problems.
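One of the best-known general procedures of this kind is the Babylonian (also called Heron’s) method for square roots: repeatedly average a guess with n divided by that guess. It works for any n, which is exactly Knuth’s point about whole classes of problems.

```python
# The Babylonian (Heron's) method for square roots: a general procedure,
# not a one-off calculation. Each step averages an over-estimate (x) with
# an under-estimate (n/x), converging rapidly on the true root.

def babylonian_sqrt(n, guess=1.0, iterations=6):
    x = guess
    for _ in range(iterations):
        x = (x + n / x) / 2   # average the two estimates
    return x

print(babylonian_sqrt(2))   # ≈ 1.41421356...
```

Six iterations from a crude starting guess already agree with the true value to machine precision, which is why a clay-tablet-era procedure is still taught as an introduction to iterative algorithms.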

The problem today thus isn’t someone or something following instructions; it’s how people centralize instructions to be brittle and dictatorial (concentrating wealth through high exit barriers) instead of making them flexible enough to embrace the compromises of representative democracy.

Hitler’s devout followers are not much different from their descendants who set up companies of self-serving algorithms. The real danger, then, is policy dependent on an intentionally monopolistic (fascist) model of proprietary technology, such as Palantir.

David Hume warned of exactly this in the 1700s, which is very late when you think about how old algorithms really are.

Is Belief In the Supernatural Declining?

A rather annoying conversation I had recently with a farmer in a southern U.S. state went something like this:

  • Him – i’m telling you that UFO are real
  • Me – in what sense is a UFO real?
  • Him – scientists admit flying objects can’t be identified
  • Me – you believe something observed may be open to interpretation? like a point of light could be Mars until we realize it’s Venus?
  • Him – right, therefore aliens are real

Hopefully you see the problem. There was no convincing him that aliens aren’t real: because something could be doubted at some level, he reasoned everything could be doubted at every level, such that anything imaginable was as good as real.

Think of his position like this. If a car approaching you at night is missing a headlight, you might wonder if it’s a car or a motorcycle. Yet this guy seeing a single light believes he is about to be the first to prove the existence of a one-eyed space monster. Probability?
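Probability, indeed. The one-light encounter is really a base-rate problem, and a tiny Bayesian calculation with made-up numbers (all priors and likelihoods below are purely illustrative) shows why the mundane explanations win even when the exotic one fits the evidence perfectly.

```python
# The one-light example as a base-rate problem. All numbers are hypothetical:
# even if a space monster "would" always show one light, the mundane
# hypotheses dominate because their prior probability is vastly higher.

priors = {                       # illustrative prior beliefs
    "motorcycle": 0.30,
    "car with broken headlight": 0.699999,
    "one-eyed space monster": 0.000001,
}
likelihood_one_light = {         # assumed P(see one light | hypothesis)
    "motorcycle": 0.99,
    "car with broken headlight": 0.95,
    "one-eyed space monster": 1.0,
}

# Bayes' rule: posterior proportional to prior times likelihood.
evidence = sum(priors[h] * likelihood_one_light[h] for h in priors)
posterior = {h: priors[h] * likelihood_one_light[h] / evidence for h in priors}

for h, p in posterior.items():
    print(f"{h}: {p:.6f}")   # the monster stays vanishingly unlikely
```

“It could be anything” is not the same as “everything is equally likely”: the unidentified light barely moves the monster hypothesis at all.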

Philosophers dispensed with such nonsense in the 1700s with empiricism, and by the 1900s had established logic and reasoning to guide our rational approach to the unknown. Popper’s work on falsification is particularly important here.

Unfortunately, the allure of mysticism is strong, especially during uncertain times such as domain shifts in technology that force people to deal with many unknowns (technology destroying some of their assumptions, like suddenly losing old routines of working from an office and commuting everywhere by car).

Bennett discusses the problem a bit in the 1999 book “Alas, Poor Ghost!”

It must be stressed that these women were not ignorant or ill-educated; nor were they socially or geographically isolated. They were dignified, sensible, experienced women, living in a middle-class suburb in a large city. Neither were they in any way eccentric;
on the contrary, they were pillars of their church and local community, essentially “respectable” in even the narrowest sense of that unpleasant term. Figures such as these do not at all give the impression that belief in supernatural cause and effect is declining.
It would seem that the world view of quite a substantial proportion of the population is probably decidedly less materialistic than scientists and historians imagine.

I’ll go one further. The celebrated Winchester House in Silicon Valley wasn’t about lack of education, and especially wasn’t about eccentricity, despite all appearances to the contrary.

Winchester was a foreshadowing of power and cognitive blindness, pouring money into fraud, much the same way that Silicon Valley today sees its “singularity” and “metaverse”. People are building a modern software version of Winchester’s infamous hardware: the “stairway into the ceiling” and “doors that open to a giant drop”.


I’ve written before about this and presented it many times in terms of Advance Fee Fraud. The more I study the problem, the more widespread I’m finding it as a function of human susceptibility to social engineering.

Facebook Facial Recognition Was Criminal. Deleting It Is A Coverup Story.

Facebook announced very publicly it was deleting its trove of facial recognition data. Somehow this has been falsely reported as Facebook won’t use facial recognition.

Let me be very clear here: Facebook said it will continue using facial recognition.

The reports bury this fact so far down it’s highly suspicious. Why would all the headlines say Facebook has stopped using facial recognition while in fact carrying a buried lede like this one:

…the company signaled facial recognition technology may be used in its products in the future.

Future? That’s today. Facebook is literally saying they will continue to use facial recognition. Please everyone stop reporting this as an end to their use!

NO NO NO and NO

Even worse, Facebook tried to use specious safety reasons to argue that facial recognition has a notable upside.

Meta’s vice-president of artificial intelligence, Jerome Pesenti, said the technology had helped visually impaired and blind users.

Capturing faces was to help visually impaired and blind Facebook users? Come on. This is like someone saying at least fascism kept the trains running on time (it didn’t). What a way to throw its blind users under the bus. You think facial recognition is bad? Well now Facebook is telling you you’re a bad person because you must hate blind people.

Let me be very clear here: Facebook is covering something up that is very bad.

Deleting all those Yann LeCun-developed templates and databases from sub-second facial scans has to be related to the fact that regulators are coming, and that internal documents are leaking.

Privacy watchdogs in Britain and Australia have opened a joint investigation into facial recognition company Clearview AI…

This is the actual headline we should be seeing for Facebook, not a bunch of puffery about it being the good guy for deleting data.

…following an investigation, Australia privacy regulator, the Office of the Australian Information Commissioner (OAIC), has found that the company breached citizens’ privacy. “The covert collection of this kind of sensitive information is unreasonably intrusive and unfair,” said OAIC privacy commissioner Angelene Falk in a press statement. “It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime…”

Facebook likely knows of a serious breach and ongoing misuse of its facial recognition data such that it’s covering up here (not unlike the massive coverup operation by Yahoo when it was breached).

Is there a smoke plume coming out of headquarters for all the internal memos on facial recognition they’re burning right now?

Black smoke rises from the roof of the Consulate-General of Russia Friday, Sept. 1, 2017, in San Francisco. The U.S. on Thursday ordered Russia to shut its San Francisco consulate and close offices in Washington and New York within 48 hours in response to Russia’s decision last month to cut U.S. diplomatic staff in Russia. Firemen were called to the consulate, but were turned away after being told there was no problem. (AP Photo/Eric Risberg)

Why else would their PR department be so odiously twisting news into “we’re shutting down facial recognition” while in the same breath saying “we’re still using facial recognition”?

Shame on journalists for reporting without doing a common sense check of the content.

Something very bad must have happened (after all, we’re talking about Facebook), and management seems to be pushing a very hot global coverup operation by manipulating the news cycles to get ahead of it.