Facebook Lied About Encryption and Mines Outrage for Profit

Update (Insider Sept 17):

  • The BBC in 2019 reported that human traffickers were using Facebook’s services to sell domestic workers.
  • Apple threatened to remove Facebook from its App Store after a report about an online slave market.
  • The Wall Street Journal reports that Facebook knew about the practice even before Apple made its threat.

There are two important and connected ethics stories in the news lately about Facebook's management of user security.

The first is what I've been telling people about WhatsApp for several years now: the design of the product has a backdoor built in, and it is barely even obscured.

On one recent call with a privacy expert and researcher, they literally dropped off when I brought this fact up. After doing some digging of their own, they jumped back on the call and said, “Shit, you're right. Why aren't people talking about this?” Often in security it's unpleasant to be correct, and I have no idea how people choose what to talk about instead.

I mean, it was never much of a secret. Anyone could easily see (as I did, as that researcher did) that the product always said if someone reported something they didn't like from another person, their whole chat could be sent to Facebook for review. In other words, a key held by a third party could unlock an “end-to-end” encrypted chat through a special reporting mechanism.

That is, by definition, a backdoor.

A trigger was designed for a third party to enter secretly and have a look around in a private space. What if the trigger to gain entry were pulled by that third party and not by either of the two “ends” in the conversation?

I have seen exactly zero proof so far that Facebook couldn't snoop without consent, meaning it's plausible a third party could drop in whenever desired, undetected, by pulling its own trigger.

Again, the very definition of a backdoor.
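To make that mechanism concrete, here is a minimal sketch of how a client-side “report” trigger can defeat end-to-end encryption without touching any keys. This is not WhatsApp's actual code; the class and function names are hypothetical, purely to illustrate the shape of the design:

```python
# Hypothetical sketch: how a client-side "report" feature can bypass
# end-to-end encryption. Names and structure are illustrative only,
# NOT WhatsApp's actual implementation.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Message:
    sender: str
    plaintext: str  # already decrypted locally by the recipient's client


@dataclass
class ChatClient:
    user: str
    history: List[Message] = field(default_factory=list)

    def receive(self, msg: Message) -> None:
        # E2E encryption protects the message in transit; once decrypted
        # here, the plaintext lives on the endpoint.
        self.history.append(msg)

    def report(self, recent: int = 5) -> List[dict]:
        # The "report" trigger packages recent *plaintext* messages and
        # ships them off for human/AI review. No key escrow is needed;
        # the endpoint itself does the disclosure.
        return [
            {"reporter": self.user, "sender": m.sender, "plaintext": m.plaintext}
            for m in self.history[-recent:]
        ]


def send_to_moderation(payload: List[dict]) -> None:
    # Stand-in for an upload to the platform's moderation queue.
    for item in payload:
        print(f"[moderation queue] {item['sender']}: {item['plaintext']}")


if __name__ == "__main__":
    client = ChatClient(user="alice")
    client.receive(Message(sender="bob", plaintext="this was end-to-end encrypted in transit"))
    # The open question in this post: what if this trigger can be pulled
    # remotely by the platform rather than by either "end"?
    send_to_moderation(client.report())
```

The point of the sketch is that nothing cryptographic needs to be broken: the endpoint already holds the plaintext, so whoever can pull the trigger decides what leaves the “private” space.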

Apparently this has finally become mainstream knowledge, which is refreshing to say the least. It perhaps puts to bed the intentional, bald-faced lies that the Facebook PR machine has been spitting out for years.

WhatsApp has more than 1,000 contract workers filling floors of office buildings in Austin, Texas, Dublin and Singapore, where they examine millions of pieces of users’ content. Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through streams of private messages, images and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems. These contractors pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute.

Policing users while assuring them that their privacy is sacrosanct makes for an awkward mission at WhatsApp. A 49-slide internal company marketing presentation from December, obtained by ProPublica, emphasizes the “fierce” promotion of WhatsApp’s “privacy narrative.” It compares its “brand character” to “the Immigrant Mother” and displays a photo of Malala Yousafzai, who survived a shooting by the Taliban and became a Nobel Peace Prize winner, in a slide titled “Brand tone parameters.”

If you think that sounds awful, here's a flashback to a chalkboard-screeching 2019 tweet that may as well have come from the ex-head of safety at a tobacco company, on an investor/politics tour claiming the mint-flavored cigarette filter was the most health-preserving thing of all time.

Source: Twitter

That is the ex-CISO of Facebook, who was “fired” into a job at Stanford to be a lobbyist for Facebook, on stage in Silicon Valley pumping investors.

The real privacy story is more like this illustration, where Facebook and WhatsApp are clearly toxic options:

So here is how that tweet immediately appeared in my mind:

Facebook is to privacy what cigars are to health.

Also, that tweet promotes the very same person who was recently boosted by Facebook and Stanford into a giant op-ed in the NYT… where he hypocritically attacked Apple over new privacy engineering meant to protect children from harm.

Hypocrisy? Yes, and it doesn't get much worse, as others already have pointed out: Facebook executives seem to mostly gin up bogus outrage for self-gain.

It's a strange fact that an ex-Facebook executive crosses over to pollute mainstream news like the NYT with disinformation, given the media research he surely already knows (being the sausage-factory insider who oversaw obvious failures of safety)…

Misinformation on Facebook got six times more clicks than reputable news sites…

When I say he oversaw obvious failures, I don't just mean all the breaches and integrity disasters of a first-time CISO who, within a couple of years, was flailing through the largest security disasters in history.

I really mean the internal memos saying Facebook is a nightmare platform of insecurity that causes direct harm to society.

One internal Facebook presentation said that among teens who reported suicidal thoughts, 13% of British users and 6% of American users traced the issue to Instagram.

“Thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse,” the researchers reportedly wrote.

This safety failure of weak and incompetent leadership at Facebook (now Stanford) has caused dire consequences for our society, our economy and, most importantly, national security.

The second point, then, is that Facebook is finally starting to face the book: it has been creating vitriol and outrage this whole time for self-gain and profit, carelessly using technology in a manner obviously counterproductive to the health and safety of society.

What the AI doesn’t understand is that I feel worse after reading those posts and would much prefer to not see them in the first place… I routinely allow myself to be enraged… wasting time doing something that makes me miserable.

This is backed up by research on Twitter showing that social media platforms effectively train people to interact with increasing hostility to generate attention (feeding a self-defeating social-entry mechanism, like stealing money to get rich).

If you feel like you’re met with a lot of anger and vitriol every time you open up your social media apps, you’re not imagining it: A new study shows how these online networks are encouraging us to express more moral outrage over time.

What seems to be happening is that the likes, shares and interactions we get for our outpourings of indignation are reinforcing those expressions. That in turn encourages us to carry on being morally outraged more often and more visibly in the future.

What this study shows is that reinforcement learning is evident in the extremes of online political discussion, according to computational social psychologist William Brady from Yale University, who is one of the researchers behind the work.

“Social media’s incentives are changing the tone of our political conversations online,” says Brady. “This is the first evidence that some people learn to express more outrage over time because they are rewarded by the basic design of social media.”

The team used computer software to analyze 12.7 million tweets from 7,331 Twitter users, collected during several controversial events, including debates over hate crimes, the Brett Kavanaugh hearing, and an altercation on an aircraft.

For a tweet to qualify as showing moral outrage, it had to meet three criteria: it had to be a response to a perceived violation of personal morals; it had to show feelings such as anger, disgust, or contempt; and it had to include some kind of blame or call for accountability.

The researchers found that getting more likes and retweets made people more likely to post more moral outrage in their later posts. Two further controlled experiments with 240 participants backed up these findings, and also showed that users tend to follow the ‘norms’ of the networks they’re part of in terms of what is expressed.
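As a rough illustration of the reinforcement dynamic the researchers describe (my own toy model in a few lines, not the study's code or data), a reward-weighted update shows how the tendency to post outrage drifts upward when outraged posts earn more engagement:

```python
# Toy model of the reinforcement dynamic described in the study.
# Illustrative only; not the researchers' actual method or data.

import random

def simulate(days: int = 200, learning_rate: float = 0.05,
             outrage_reward: float = 2.0, neutral_reward: float = 1.0,
             seed: int = 42) -> float:
    """Return the probability of posting outrage after `days` posts."""
    random.seed(seed)
    p_outrage = 0.2  # initial tendency to post moral outrage

    for _ in range(days):
        posts_outrage = random.random() < p_outrage
        # Engagement (likes/retweets) acts as the reward signal; outraged
        # posts are assumed to earn more of it, per the study's finding.
        mean_reward = outrage_reward if posts_outrage else neutral_reward
        reward = random.expovariate(1.0 / mean_reward)
        if posts_outrage:
            # Reward received for outrage nudges the tendency upward.
            p_outrage += learning_rate * reward * (1.0 - p_outrage)
        else:
            # Mild decay when outrage goes unrewarded.
            p_outrage -= learning_rate * 0.1 * p_outrage
        p_outrage = min(max(p_outrage, 0.01), 0.99)

    return p_outrage


if __name__ == "__main__":
    print(f"final outrage probability: {simulate():.2f}")
```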

And that’s a great explanation for the chalkboard-screeching 2019 tweet from Facebook’s ex-CISO, which could also be described as an addiction to spreading misinformation.

Amnesty International perhaps put it best, when they alleged the ex-CISO…

…created a system that gives powerful users free rein to harass others, make false claims, and incite violence. “The message from Facebook is clear – if you’re influential enough, they’ll let you get away with anything.”

Is it any wonder Facebook lied about user safety if they fundamentally side with tyranny and don't believe in accountability? The site was founded on the theory that Zuckerberg would never face consequences for violating the privacy of young women and then trying to harm them against their will.

Even the WSJ reports the company has since been building upon market corruption and unfairness rooted in privilege abuse.

Company Documents Reveal a Secret Elite That’s Exempt. A program known as XCheck has given millions of celebrities, politicians and other high-profile users special treatment, a privilege many abuse…

The question really becomes: given a company with the worst trust record in history and endless reports of security breaches and privacy violations, who (besides Stanford) still believes in this brand or its talking heads?

In other words, it should surprise exactly nobody that Facebook executives put a backdoor into their encryption while fraudulently promoting it as the safest option, all for their own enrichment through the suffering of others, especially young girls.

Or as I warned on this blog literally ten years ago in a post called “why I deleted Facebook”…

…private company funded by Russians without any transparency that most likely hopes to profit from your loss (of privacy)… if Facebook is dependent on Zuckerberg their users are screwed.

Is a Cookie Banner Ban Coming? “A Smoother Mechanism for Consent”

There is an interesting detail buried within an article about cookie banners:

“I often hear people say how tired they are of having to engage with so many cookie pop-ups,” said Denham. “That fatigue is leading to people giving more personal data than they would like. The cookie mechanism is also far from ideal for businesses and other organisations running websites, as it is costly and it can lead to poor user experience.

“While I expect businesses to comply with current laws, my office is encouraging international collaboration to bring practical solutions in this area.”

She will raise the issue during a virtual meeting with leaders from the US, France, Germany, Canada, Japan, Italy, the OECD and WEF. Each representative will suggest a technology or innovation issue on which they believe closer international cooperation is required.

Denham has indicated that a smoother mechanism for consent is already technologically possible and compliant with data protection regulations. No further detail was given on the mechanism, which was simply described as “an idea on how to improve the current cookie consent mechanism, making web browsing smoother and more business friendly while better protecting personal data”.

It is very important to note that fatigue is being cited as a reason not to seek consent from the very person with the most at stake in the harms.

In this context of a data owner being undermined, read the start of that last paragraph again:

Denham has indicated that a smoother mechanism for consent is already technologically possible and compliant with data protection regulations.

If you haven’t already looked at how W3C Solid brings consent to the Web, now is a great time to start.
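As a rough sketch of what a “smoother mechanism” could look like when the data owner holds the consent record instead of every website holding its own banner state (inspired by the user-controlled model Solid promotes, but emphatically not the Solid spec or API; all names here are hypothetical):

```python
# Rough illustration of consent held by the data owner rather than by
# each website. Inspired by the user-controlled model, but this is NOT
# the Solid spec or API; names are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, Optional


@dataclass
class ConsentGrant:
    site: str
    purpose: str            # e.g. "analytics", "advertising"
    granted_at: datetime
    expires_at: Optional[datetime] = None


@dataclass
class ConsentStore:
    """A single store the user controls, queried by every site
    instead of each site nagging with its own banner."""
    grants: Dict[str, ConsentGrant] = field(default_factory=dict)

    def grant(self, site: str, purpose: str) -> None:
        self.grants[f"{site}:{purpose}"] = ConsentGrant(
            site=site, purpose=purpose,
            granted_at=datetime.now(timezone.utc),
        )

    def revoke(self, site: str, purpose: str) -> None:
        self.grants.pop(f"{site}:{purpose}", None)

    def is_allowed(self, site: str, purpose: str) -> bool:
        grant = self.grants.get(f"{site}:{purpose}")
        if grant is None:
            return False
        return grant.expires_at is None or grant.expires_at > datetime.now(timezone.utc)


if __name__ == "__main__":
    store = ConsentStore()
    store.grant("news.example", "analytics")
    print(store.is_allowed("news.example", "analytics"))    # True
    print(store.is_allowed("news.example", "advertising"))  # False
```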

Also, I found it delightful that this article forced me into an ugly consent mechanism before I could read about why consent fatigue is real.

Do People Dump Too Much Privacy Using Smart Toilets?

The key context to consider with smart toilets is whether they enhance or detract from data analysis already being done at the block-level, let alone in bulk wastewater treatment analysis.

In other words, does generating more client-side analysis of human output (dare I call it log analysis) benefit the individual, relative to having it done already on the service side?

I've given presentations about this since at least 2012, warning how encryption and key management are central to protecting the privacy of toilet dumps (of data).
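Here is the kind of thing I meant back then, as a minimal sketch and nothing more (the device fields and the upload step are hypothetical): readings get encrypted on the device with a key the resident controls, so anything flowing downstream to a vendor or service is ciphertext unless the owner chooses to share the key.

```python
# Minimal sketch of client-side encryption for health-sensor readings.
# Device fields and the upload step are hypothetical; the point is that
# the key stays with the resident, not the toilet vendor.

import json
from cryptography.fernet import Fernet  # pip install cryptography


def encrypt_reading(key: bytes, reading: dict) -> bytes:
    """Serialize and encrypt one sensor reading with the owner's key."""
    return Fernet(key).encrypt(json.dumps(reading).encode("utf-8"))


def decrypt_reading(key: bytes, token: bytes) -> dict:
    """Only someone holding the key (e.g. the resident's physician,
    if granted access) can read the data back."""
    return json.loads(Fernet(key).decrypt(token))


if __name__ == "__main__":
    owner_key = Fernet.generate_key()  # generated and kept on the owner's device
    reading = {"timestamp": "2021-09-17T07:30:00Z", "bp_systolic": 118}
    ciphertext = encrypt_reading(owner_key, reading)
    # Only ciphertext ever leaves the home network in this model.
    print(ciphertext[:40], b"...")
    print(decrypt_reading(owner_key, ciphertext))
```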

Anyway, fast forward a decade later and the WSJ wants you to believe that all this old debate is somehow a new topic being figured out by none other than the genocidal brand of Stanford.

The next frontier of at-home health tracking is flush with data: the toilet. Researchers and companies are developing high-tech toilets that go beyond adding smart speakers or a heated seat. These smart facilities are designed to look out for signs of gastrointestinal disease, monitor blood pressure or tell you that you need to eat more fish, all from the comfort of your personal throne.

Let me just make a few more points about Stanford's ethical gaps, given the WSJ reports they are using Korea to manufacture their design into an entire toilet (instead of a more sensible sensor attachment, plumbing product, or seat modification).

The Stanford team has signed an agreement with Izen, a Korean toilet maker, to manufacture the toilet. They hope to have working prototypes that can be used in clinical trials by the end of this year, says Seung-min Park, who leads the project, which was started by Sanjiv Gambhir, the former chair of radiology at Stanford, who died in 2020.

First, toilets are semi-permanent and rarely upgraded or replaced, so such a technology shift is a terrible idea from both a privacy and interoperability/freedom perspective. A vulnerability in the toilet design is a very expensive mistake, unlike a seat, sensor or plumbing change.

Second, of course Stanford did not go to Japan (arguably the world leader in toilets, let alone sanitation technology) because the Japanese would have laughed Stanford out of the room for “inventing” something already decades old.

Look at this April 2013 news from Toto, for example:

An “Intelligence Toilet” system, created by Japan’s largest toilet company, Toto, can measure sugar levels in urine, blood pressure, heart rate, body fat and weight. The results are sent from the toilet to a doctor by an internet-capable cellular phone built into the toilet. Through long distance monitoring, doctors can chart a person’s physical well-being.

Or let’s look all the way back to May 2009 news, perhaps?

Toto’s newest smart john, the Intelligence Toilet II, is proving that it is more than an ordinary porcelain throne by recording and analyzing important data like weight, BMI, blood pressure, and blood sugar levels.

There’s a “sample catcher” in the bowl that can obtain urine samples. Even by Japanese standards that’s impressive. Yes it has the bidet, the air dryer, and heated seat, but it’s also recording pertinent information.

This information is beamed to your computer via WiFi and can help you, with the guidance of a trained physician, monitor health and provide early detection for some medical conditions.

The Japanese company Toto, a world-leading brand in toilets, is thus easily credited in the actual news with having these toilets available for purchase in the early 2000s. Definitely NOT new.

Even another world-recognizable Japanese technology company had intelligent toilet sensors on the market for years already.

In September 2018, electronics giant Panasonic released a health-tracking toilet in China that tests the urine for blood, protein, and other key health indicators. The device also uses sensors embedded in an armrest to measure a person’s body fat and identify different users by scanning their fingerprints.

That's a really good insight into why Stanford went to Korea to make a knock-off of Japanese designs: it failed to partner with a Japanese company to design and release something that has already been designed and released for over a decade.

All this speaks to the weird relationship that American academic institutions have with journalists who publish unverified puff and PR instead of actual news.

Stanford somehow gets away with this regularly, along with brandishing a name that represents crimes against humanity.

Anyway, here are just some of my old slides from 2013, including examples for discussion of privacy technology for toilets as well as some data from places like Chicago doing analysis of drug usage (illegal/counterfeit) in wastewater.

And I guess I also should mention that in 2019 I wrote about all this under the title “Yet More Shit AI”.

Governance Always Has Been About “Nudge” Behavior Economics

The Guardian gets part of their history right in a new article about government use of political theory and economics:

British government’s fondness for minor behavioural modification tactics began in the David Cameron era…

Indeed, you may recall in 2014 we hosted a discussion on exactly that topic:

…interface between economics and political science in health care policy analysis… [for the] “Behavioural Public Policy”, an interdisciplinary and international peer-reviewed journal devoted to behavioural research and its relevance to public policy.

However, I find it interesting that the article doesn't realize the London School of Economics itself was practically founded on the principle of using personal data to nudge and influence citizens.

And that was based on principles going back at least to the 1700s.

Source: flyingpenguin

Studying this in proper long-term history helps explain why so much of WWI and WWII shows evidence of the British government's fondness for minor behavior-modification tactics, let alone its colonial exploits, all frequent topics of this blog.