How Facebook Avoids Consequences for Crimes

Yet ANOTHER bone-headed security screw-up at Facebook.

Source: BuzzFeed News

And in that article you will find this sentence:

“The authors never intended to publish this as a final document to the whole company,” a Facebook spokesperson said in a statement.

NEVER INTENDED.

Intended to publish? Does it matter what they intended to publish?

After this internal report went public (exposing how white nationalist violence was being facilitated), Facebook’s decision to deny its own staff access to the report is a giant head-in-sand move.

Imagine the U.S. government responding to Watergate by saying they never intended to have evidence of crimes seen by the whole country.

And it also reminded me of a very old story.

That faulty “never intended” excuse comes straight out of Facebook’s origin story, back when Zuckerberg was rightfully accused of gross privacy violations (exposing how white male abuse of minority women was being facilitated).

Comments on the e-mail lists of both Fuerza Latina and the Association of Harvard Black Women blasted the site.

“I heard from a friend and I was kind of outraged. I thought people should be aware,” said Fuerza Latina President Leyla R. Bravo ’05, who forwarded the link over her group’s list-serve.

Zuckerberg said that he was aware of the shortcomings of his site, and that he had not intended it to be seen by such a large number of students.

HAD NOT INTENDED.

Intended to be seen? Does it matter what he intended to be seen?

Zuckerberg was aware of the problems and did it anyway because… he didn’t intend for his crimes to be seen by people who would hold him accountable.

The Stanford athlete didn’t intend to be seen raping a girl, although he was aware of the shortcomings of his actions. The Nazis didn’t intend for their communications to be seen by such a large number of people, although they were aware of the shortcomings of genocide.

It’s like a full admission that he commits crimes because he doesn’t expect to get caught, and when he is caught he just says he didn’t expect to get caught, and then moves on.

With that in mind, the Facebook internal report reveals that “Stop the Steal” was generating speech that was 30% hate and 40% violent insurrection, yet allegedly staff couldn’t decide if that meant they should do something about it. Look at the percentages on the left versus the norms on the right.

The platform graded their own response to imminent danger to democracy as lazy and piecemeal.

…very difficult to know whether what we were seeing was a coordinated effort to delegitimize the election, or whether it was protected free expression by users who were afraid and confused and deserved our empathy…

Coordinated or uncoordinated, afraid and confused or not, violent hate speech doesn’t often get framed as needing… empathy.

I mean, 40% violent speech laced with hate for America flows through their system and Facebook is like: oh look, dangerous white nationalism; maybe this time the usual “afraid and confused” Nazis will win and Facebook can take credit for “helping” Nazis during their time of need?

Will France be Worse Off Using AI for Anti-Terrorism?

News from France sounds exactly backwards to me:

French intelligence officials plan to use older intelligence data, including data the government isn’t currently allowed to retain, to train AI systems.

Such an approach should be called out for what it is: repeating the worst mistakes in history at faster speed with less oversight.

Think of it this way: if you predicted future police action in France by training on its tragic history of colonialism, you would just repeat that history instead of shifting towards what should happen instead.

I just recorded a new presentation for the 2021 RSA Conference about this exact problem. AI can’t be implemented as a detection system for terrorism without the heavy hand of human philosophy and control over what is defined as future terrorism.

Doghouse: How Not to Build a Club or a House.

Source: Leaked Clubhouse architectural rendering of their designs.

There’s not much to add to a brilliant take-down of Clubhouse, the toxic and completely tone-deaf platform that just launched.

…demonstrates a growing chasm between attitudes in the United States and Europe about data governance, as Silicon Valley continues to export its technology and ideals around the world. Scraping is the same technique that controversial start-up Clearview AI, popular with law enforcement, has used to amass its facial recognition database. Although it’s received cease-and-desist letters from Facebook and Google (who themselves would not exist but for scraping and, in the case of Facebook, scraping non-public information), Clearview AI defends its practices on First Amendment grounds. In Europe, where data governance is more concerned with the fundamental rights of individuals than with the rights of corporations, techniques like scraping and the repurposing of publicly accessible data conflict with core principles in the General Data Protection Regulation, such as purpose limitation, notification and consent requirements, the individual’s right to object to certain processing and more. Clubhouse is already under investigation by data protection authorities in both France and Germany for violations of data protection law.

Perhaps it’s a bit unfair to say that the United States across the board has the same attitude, as many people disagree (myself included, hellooo!).

More accurate, in my mind, is to say there is a chasm between Europe and the irresponsible bad actors thriving in an unregulated United States (e.g. Silicon Valley).

This perhaps is explained best in the next section of the article, which really struck me as a repeat of the Google Bus story.

While there are bad attitudes in the United States, those attitudes are in fact separated by a growing chasm from other people in the United States.

It is that kind of exclusivity and bogus ennoblement expressing false privilege, all done by design, that makes Clubhouse so inherently and willfully evil.

Clubhouse’s gaslighting on privacy and security concerns pales in comparison to its disregard for accessibility. In its quest for exclusivity, Clubhouse has managed to exclude large swaths of the population.

Boom. The author just described the infamous Google Bus.

Contact Tracing Fail: Why is Google So Bad at Basic Security and Privacy?

Years ago I wrote about Google’s calculator absurdly requiring permission for network access.

A calculator requires network access?

Looking back now, and based on recent headlines, perhaps the calculator story should have been front page news.

Someone just prompted me to answer why Google’s Authenticator app needs to track location and data, and the calculator immediately came to mind. I guess Google is giving me a reason to write an analysis of 2FA privacy options better than theirs.

In related news, lately we’re all talking a lot about contact tracing safety and, surprise surprise, Google has screwed up that security as well.

Researchers say hundreds of preinstalled apps can access a log found on Android devices where sensitive contact tracing information is stored.

A calculator misstep seems comical, yet this kind of privacy failure can be catastrophic.

Let this forever be proof that “too big to fail” is a logical fallacy, not to mention an economic fantasy.

The Markup digs even deeper at Google, pointing out an apparent slow response and lack of concern about user safety.

The Markup has learned that not only does the Android version of the contact tracing tool contain a privacy flaw, but when researchers from the privacy analysis firm AppCensus alerted Google to the problem back in February of this year, Google failed to change it. […] “This fix is a one-line thing where you remove a line that logs sensitive information to the system log. It doesn’t impact the program, it doesn’t change how it works,” said Joel Reardon, co-founder and forensics lead of AppCensus. “It’s such an obvious fix, and I was flabbergasted that it wasn’t seen as that.”
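
To make that “one-line thing” concrete, here is a minimal hypothetical sketch in Kotlin (not Google’s actual code; the function name, log tag, and parameters are invented for illustration) of the class of bug being described: a single call that copies a sensitive Bluetooth-derived identifier into the shared system log, and the equally small fix of deleting it.

```kotlin
import android.util.Log

// Hypothetical sketch of the bug class described above -- not Google's actual code.
fun onBeaconSighting(rollingProximityId: ByteArray, rssi: Int) {
    // BUG: writes the rolling proximity identifier into the device-wide log (logcat),
    // where any process holding the READ_LOGS privilege can later read it.
    Log.d("ExposureNotification", "Saw RPI=${rollingProximityId.toHex()} rssi=$rssi")

    // FIX (the "one-line thing"): delete the line above, or log only a non-identifying event.
    // Log.d("ExposureNotification", "Beacon sighting recorded")
}

// Small helper assumed only for this sketch.
fun ByteArray.toHex(): String = joinToString("") { "%02x".format(it) }
```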

The big rub seems to be between Google’s trust of Android apps and the security researcher who knows that’s a very broken model to rely upon.

Reardon also reached out to Giles Hogben, Android’s director of privacy engineering, on Feb. 19. In an email, Hogben noted, in response to Reardon’s concerns, that the system logs could only be accessed by certain apps.

“[System logs] have not been readable by unprivileged apps (only with READ_LOGS privileged permission) since way before Android 11 (can check exactly when but I think back as far as 4),” Hogben said in his Feb. 25 reply.

Reardon, however, said hundreds of preinstalled apps can still read those system logs. “They’re actually collecting information that would be devastating to the privacy of people who use contact tracing,” he said.

Reading the logs is reading the logs, as we used to say. Reardon is right: a preinstalled app that can read the logs means the data boundary is pierced and privacy expectations are breached.
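
To show what “reading the logs” means in practice, here is another hypothetical Kotlin sketch, reusing the invented “ExposureNotification” tag from the earlier example. An app granted android.permission.READ_LOGS (a privilege ordinary third-party apps have not been able to obtain since roughly Android 4.1, consistent with Hogben’s point, yet one that preinstalled apps can hold, which is Reardon’s point) can dump the entire device log, including whatever other processes wrote there.

```kotlin
import java.io.BufferedReader
import java.io.InputStreamReader

// Hypothetical sketch: how an app holding READ_LOGS could harvest other apps' log lines.
fun dumpContactTracingLogLines(): List<String> {
    // "logcat -d" dumps the current log buffer and exits. With READ_LOGS granted this
    // includes every app's entries; without it (a normal third-party app), only our own.
    val process = Runtime.getRuntime().exec(arrayOf("logcat", "-d"))
    BufferedReader(InputStreamReader(process.inputStream)).use { reader ->
        return reader.readLines().filter { it.contains("ExposureNotification") }
    }
}
```

No exploit and no root required; just a privileged reader sitting next to an overly chatty writer.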