Update (Insider Sept 17):
- The BBC in 2019 reported that human traffickers were using Facebook’s services to sell domestic workers.
- Apple threatened to remove Facebook from its App Store after a report about an online slave market.
- The Wall Street Journal reports that Facebook knew about the practice even before Apple made its threat.
There are two important and connected ethics stories in the news lately about Facebook’s management of user security.
The first is what I’ve been telling people about WhatsApp for several years now: the product was designed with a backdoor built in and barely obscured.
On one recent call with a privacy expert and researcher, they literally dropped off when I brought this fact up. After going off to do some digging they jumped back on the call and said “shit, you’re right, why aren’t people talking about this?” Often in security it’s unpleasant to be correct, and I have no idea why people choose to talk about other things instead.
It was never much of a secret. Anyone could easily see (as I did, as that researcher did) that the product always said if someone reported something they didn’t like from another person, their whole chat could be sent to Facebook for review. In other words, a key held by a third party could unlock an “end-to-end” encrypted chat because of a special reporting mechanism.
That is a backdoor, by definition.
A trigger was designed for a third party to enter secretly and have a look around in a private space. What if the trigger to gain entry was pulled by the third party itself, and not by either of the two “ends” in the conversation?
I have seen exactly zero proof so far that Facebook couldn’t snoop without consent, meaning it’s plausible a third party could drop in whenever it wanted, undetected, by using its own trigger.
Again, the very definition of a backdoor.
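To make the mechanism concrete, here is a minimal sketch of how a client-side “report” flow can route around end-to-end encryption. This is my own illustration under stated assumptions, not WhatsApp’s actual code; the endpoint, function names and payload shape are all hypothetical. The point is simply that the plaintext already exists on the device, so whoever can fire the report trigger can hand it to a third party.

```python
# Hypothetical illustration only; not WhatsApp's actual code or endpoints.
# In any end-to-end encrypted app the plaintext already exists on the
# recipient's device, so a "report" code path can simply forward it.
import json
import urllib.request

MODERATION_ENDPOINT = "https://moderation.example.invalid/report"  # assumed URL


def report_conversation(recent_plaintext_messages, reporter_id):
    """Forward already-decrypted messages to a third party for review.

    Whoever can trigger this function -- the user pressing "report", or
    the vendor via a remote flag -- can read the "end-to-end encrypted"
    chat without ever breaking the cryptography.
    """
    payload = json.dumps({
        "reporter": reporter_id,
        "messages": recent_plaintext_messages,  # plaintext, not ciphertext
    }).encode("utf-8")
    request = urllib.request.Request(
        MODERATION_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)  # the third party now holds plaintext
```

Nothing in the cryptography is broken in a sketch like this; the design simply moves plaintext around it, which is exactly why who controls the trigger is the whole question.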
Apparently this has finally become mainstream knowledge, which is refreshing to say the least. Perhaps it even puts to bed the intentional, bald-faced lies the Facebook PR machine has been spitting out for years.
WhatsApp has more than 1,000 contract workers filling floors of office buildings in Austin, Texas, Dublin and Singapore, where they examine millions of pieces of users’ content. Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through streams of private messages, images and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems. These contractors pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute.
Policing users while assuring them that their privacy is sacrosanct makes for an awkward mission at WhatsApp. A 49-slide internal company marketing presentation from December, obtained by ProPublica, emphasizes the “fierce” promotion of WhatsApp’s “privacy narrative.” It compares its “brand character” to “the Immigrant Mother” and displays a photo of Malala Yousafzai, who survived a shooting by the Taliban and became a Nobel Peace Prize winner, in a slide titled “Brand tone parameters.”
If you think that sounds awful, here’s a flashback to a chalkboard-screeching 2019 tweet that may as well have come from the ex-head of safety at a tobacco company on an investor/politics tour, claiming the mint-flavored cigarette filter was the most health-preserving thing of all time.
That is the ex-CISO of Facebook, who was “fired” into a job at Stanford to be a lobbyist for Facebook, on stage in Silicon Valley pumping investors.
The real privacy story is more like this illustration, where Facebook and Whatsapp are clearly toxic options:
So here is how that tweet immediately appeared in my mind:
Also, that tweet promotes the very same person who was very recently pushed by Facebook and Stanford into a giant op-ed in the NYT… where he hypocritically attacked Apple over new engineering privacy protections meant to shield children from harm.
Hypocrisy? Yes, and it doesn’t get much worse, as others have already pointed out: Facebook executives seem to mostly gin up bogus outrage for their own gain.
It’s a strange fact that an ex-Facebook executive is crossing over to pollute mainstream news like the NYT with disinformation, given the research on media he surely already knows (being the sausage-factory insider who oversaw obvious failures of safety)…
Misinformation on Facebook got six times more clicks than reputable news sites…
When I say he oversaw obvious failures, I don’t just mean all the breaches and integrity disasters, or that as a first-time CISO he was within a couple of years flailing through the largest security disasters in history.
I really mean the internal memos saying Facebook is a nightmare platform of insecurity that causes direct harm to society.
One internal Facebook presentation said that among teens who reported suicidal thoughts, 13% of British users and 6% of American users traced the issue to Instagram.
“Thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse,” the researchers reportedly wrote.
This safety failure by weak and incompetent leadership at Facebook (now at Stanford) has caused dire consequences for our society, our economy and, most importantly, national security.
The second story is thus that Facebook is finally starting to face the book: it has been generating vitriol and outrage this whole time for its own gain and profit, carelessly using technology in a manner obviously counterproductive to the health and safety of society.
What the AI doesn’t understand is that I feel worse after reading those posts and would much prefer to not see them in the first place… I routinely allow myself to be enraged… wasting time doing something that makes me miserable.
This is backed up by research on Twitter showing that social media platforms effectively train people to interact with increasing hostility in order to generate attention (feeding a self-defeating social entry mechanism, like stealing money to get rich).
If you feel like you’re met with a lot of anger and vitriol every time you open up your social media apps, you’re not imagining it: A new study shows how these online networks are encouraging us to express more moral outrage over time.
What seems to be happening is that the likes, shares and interactions we get for our outpourings of indignation are reinforcing those expressions. That in turn encourages us to carry on being morally outraged more often and more visibly in the future.
What this study shows is that reinforcement learning is evident in the extremes of online political discussion, according to computational social psychologist William Brady from Yale University, who is one of the researchers behind the work.
“Social media’s incentives are changing the tone of our political conversations online,” says Brady. “This is the first evidence that some people learn to express more outrage over time because they are rewarded by the basic design of social media.”
The team used computer software to analyze 12.7 million tweets from 7,331 Twitter users, collected during several controversial events, including debates over hate crimes, the Brett Kavanaugh hearing, and an altercation on an aircraft.
For a tweet to qualify as showing moral outrage, it had to meet three criteria: it had to be a response to a perceived violation of personal morals; it had to show feelings such as anger, disgust, or contempt; and it had to include some kind of blame or call for accountability.
The researchers found that getting more likes and retweets made people more likely to post more moral outrage in their later posts. Two further controlled experiments with 240 participants backed up these findings, and also showed that users tend to follow the ‘norms’ of the networks they’re part of in terms of what is expressed.
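For readers who want that reinforcement mechanism spelled out, here is a toy simulation of the dynamic the study describes. It is my own sketch, not the researchers’ model: the agent, parameters and reward probability are all assumptions chosen only to show how intermittent “likes” can ratchet up the rate of outrage posting over time.

```python
# Toy simulation (my own sketch, not the study's method) of how likes and
# retweets can act as a reinforcement signal for expressing moral outrage.
import random


def simulate_outrage(posts=200, learning_rate=0.05, reward_prob=0.7, seed=1):
    """Start with a low tendency to post outrage; every rewarded outrage
    post nudges that tendency upward, the way reinforcement learning would."""
    random.seed(seed)
    outrage_tendency = 0.1  # assumed initial probability of an outrage post
    trajectory = []
    for _ in range(posts):
        posted_outrage = random.random() < outrage_tendency
        if posted_outrage and random.random() < reward_prob:
            # the network rewards the post, so the behavior is reinforced
            outrage_tendency += learning_rate * (1.0 - outrage_tendency)
        trajectory.append(outrage_tendency)
    return trajectory


if __name__ == "__main__":
    history = simulate_outrage()
    print(f"outrage tendency: start {history[0]:.2f}, end {history[-1]:.2f}")
```

Even with these made-up numbers the tendency climbs steadily, which mirrors the pattern the researchers describe: the platform never has to tell anyone to be angrier, the reward schedule does it.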
And that’s a great explanation for the chalkboard-screeching 2019 tweet from Facebook’s ex-CISO, which could also be described as an addiction to spreading misinformation.
Amnesty International perhaps put it best, when they alleged the ex-CISO…
…created a system that gives powerful users free rein to harass others, make false claims, and incite violence. “The message from Facebook is clear – if you’re influential enough, they’ll let you get away with anything.”
Is it any wonder Facebook lied about user safety if it fundamentally sides with tyranny and doesn’t believe in accountability? The site was founded on the theory that Zuckerberg would never face consequences for violating the privacy of young women and then trying to harm them against their will.
Even the WSJ reports that the company has since been building on market corruption and unfairness rooted in the abuse of privilege.
Company Documents Reveal a Secret Elite That’s Exempt. A program known as XCheck has given millions of celebrities, politicians and other high-profile users special treatment, a privilege many abuse…
The question really becomes: given a company with the worst trust record in history and endless reports of security breaches and privacy violations, who (besides Stanford) still believes in this brand or its talking heads?
In other words it should surprise exactly nobody that Facebook executives put a backdoor into their encryption while fraudulently promoting it as the safest option, all for their own enrichment through the suffering of others, especially young girls.
Or as I warned on this blog literally ten years ago in a post called “why I deleted Facebook”…
…private company funded by Russians without any transparency that most likely hopes to profit from your loss (of privacy)… if Facebook is dependent on Zuckerberg their users are screwed.