Category Archives: Security

Facebook’s Meta Algorithms Harming Children by Pushing Toxic Content

A new report reveals how easy it is to prove that “Meta” business logic intentionally harms children

“One of the things I was struck by was how profoundly easy it was to identify this pro–eating-disorder bubble,” said Rys Farthing, data policy director at the advocacy group Reset Australia and the leader of the research.

Farthing said that exposure to the content was primarily driven by Instagram’s suggestions about which users to follow. Test accounts that expressed an interest in weight loss or disordered eating were quickly flooded with recommendations from the platform to follow other users with these interests, including those that openly encourage disordered eating.

The most telling part is two-fold. First, Facebook tries to undermine trust in journalists, as I’ve written about before here. Their official response cited in the article is to allege that people reporting on harm to children don’t understand what it’s really like to be inside Facebook trying to profit on harm to children.

Second, the researcher here in fact says the exact opposite of what’s being alleged — he’s in the business of putting himself out of business. He understands exactly why Facebook’s business model is so toxic.

Researchers, journalists, and advocates have been raising alarms about disordered eating content on Instagram for years, culminating in fall 2021 when internal Facebook documents provided by whistleblower Frances Haugen showed that Instagram led teen girls to feel worse about their bodies. This new report shows that Meta’s struggles to curb this kind of harm are still ongoing.

But Farthing and others hope change may be around the corner: US Sens. Richard Blumenthal and Marsha Blackburn recently introduced the Kids Online Safety Act, which would create a duty for platforms to “act in the best interests of a minor” using their services. The California legislature is considering a similar provision, modeled after the UK’s Age Appropriate Design Code, that would require companies to consider children’s “best interests” when building or modifying their algorithms.

“If we can muster the courage to actually hold tech companies to account, we could get some of this legislation through,” Farthing said. “And maybe when we have this conversation next year, I might actually have put myself out of business.”

Think hard about that contrast in integrity of work.

Then think hard about the fact that Facebook has attracted far more illegal child sexual abuse images than any other platform — last year alone nearly 30 million reports.

For a platform claiming to be advanced, Facebook instead uses obviously outdated methods and unethical practices that only invite more abuse and harm. Other tech companies take the exact opposite approach: any images they are unsure about are reported for further investigation, putting society and safety first over profit.

For example, Facebook is known to classify millions of abused children as adults because it sees this as a loophole to avoid the cost of protection: treat a 13-year-old as “fully developed” to lower reporting levels. Facebook moderators have literally complained that they were pressured to “bump up” children to the adult class or face negative performance reviews. This nonetheless backfires, since it is an invitation for child abusers to flock onto the platform, increasing the volume of abuse images to exploit what seems to be ongoing willful ignorance and toxicity in Facebook management.

Subtle Tweak to AI Blows Up Missile Accuracy Test

This article, on the USAF’s concern about narrow definitions of success, is a great read.

In a recent test, an experimental target recognition program performed well when all of the conditions were perfect, but a subtle tweak sent its performance into a dramatic nosedive, Maj. Gen. Daniel Simpson, assistant deputy chief of staff for intelligence, surveillance, and reconnaissance, said on Monday.

Initially, the AI was fed data from a sensor that looked for a single surface-to-surface missile at an oblique angle, Simpson said. Then it was fed data from another sensor that looked for multiple missiles at a near-vertical angle.

“What a surprise: the algorithm did not perform well. It actually was accurate maybe about 25 percent of the time,” he said.
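The failure mode Simpson describes is classic training/test distribution shift: a classifier fit to one sensor geometry degrades sharply when the input distribution changes. The actual program is not public, so this is only a generic sketch with made-up numbers, using a toy one-dimensional threshold classifier to show how an in-distribution score near 100% can collapse once the target’s signature moves under new viewing conditions:

```python
import random

random.seed(0)

def sample(mean, n):
    # Gaussian "sensor return" feature for n observations (illustrative only)
    return [random.gauss(mean, 0.5) for _ in range(n)]

# "Training" conditions: single missile, oblique angle.
# Missile returns cluster around +2, background clutter around -2.
train_missile = sample(+2.0, 500)
train_clutter = sample(-2.0, 500)

# Learn a trivial threshold classifier: midpoint of the class means.
mean = lambda xs: sum(xs) / len(xs)
threshold = (mean(train_missile) + mean(train_clutter)) / 2

def accuracy(missile, clutter):
    # Fraction of observations on the correct side of the threshold
    hits = sum(x > threshold for x in missile) + sum(x <= threshold for x in clutter)
    return hits / (len(missile) + len(clutter))

# In-distribution test: near-perfect performance.
print(accuracy(sample(+2.0, 500), sample(-2.0, 500)))

# Shifted conditions (e.g. near-vertical angle changes the signature):
# missile returns now land on the clutter side of the learned threshold.
print(accuracy(sample(-1.0, 500), sample(-2.0, 500)))
```

The point of the toy is that nothing about the classifier changed; only the data did, which is why a “subtle tweak” to test conditions is enough to expose a narrow definition of success.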

It reminds me of 1960s IGLOO WHITE accuracy reports, let alone smart bombs of the Korean War, and how poorly general success criteria were defined (e.g. McNamara’s views on AI and the Fog of War).

Twitter as Bully Pulpit: “political right enjoys higher amplification compared to the political left”

A report from December 2021 helps explain why political right groups pushed Elon Musk to take over Twitter and privatize it, to help them silence the political left:

Politicians and commentators from all sides allege that Twitter’s algorithms amplify their opponents’ voices, or silence theirs. Policy makers and researchers have thus called for increased transparency on how algorithms influence exposure to political content on the platform. Based on a massive-scale experiment involving millions of Twitter users, a fine-grained analysis of political parties in seven countries, and 6.2 million news articles shared in the United States, this study carries out the most comprehensive audit of an algorithmic recommender system and its effects on political content. Results unveil that the political right enjoys higher amplification compared to the political left.
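The study’s audit boils down to comparing how much reach a group’s tweets get under the personalized algorithmic timeline versus the reverse-chronological control group. The exact formulation is in the paper; below is only a minimal sketch of that ratio, with hypothetical impression counts:

```python
def amplification_ratio(algo_impressions, chrono_impressions):
    """Reach under algorithmic ranking relative to the reverse-chronological
    control; a value above 1.0 means the content was amplified.
    (Illustrative simplification of the study's audit metric.)"""
    return algo_impressions / chrono_impressions

# Hypothetical impression counts for two parties' tweets as seen by
# the treatment (algorithmic timeline) and control (chronological) groups.
party_a = amplification_ratio(algo_impressions=1_800, chrono_impressions=1_000)
party_b = amplification_ratio(algo_impressions=1_200, chrono_impressions=1_000)

# The audit question is whether these ratios differ systematically by party.
print(party_a, party_b)
```

Here both hypothetical parties are amplified (ratio above 1.0), but unequally; the study’s finding is that this asymmetry consistently favors the political right.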

In other words, the political right is operating on a zero-sum theory of communication. When someone who disagrees with them is able to speak, the political right believes its own rights are being suppressed. Civil rights for all humans are treated as a loss to white nationalism, which rejects the idea of others having equality.

This helps explain the weird phenomenon where the political right gets more leeway on Twitter than others to spread harm, while at the same time complaining about being over-censored. They want opponents silenced, and until that happens they will argue that they are being censored by the very presence of any opposition.

If Martin Luther King were on Twitter today, the political right would surely argue that they will not be satisfied until his account is completely silenced, much the same way Tesla has told its Black staff to “be more thick skinned” about abuse at work or quit.

Microsoft: Ukraine and U.S. Were the Most Targeted Countries 2020-2021

Microsoft’s new report on the very loud computer attacks during the Ukraine War highlights how the U.S. has been a target of aggression.

Source: Microsoft

The chart above represents the geographic distribution of customers notified of all nation state threat activity, not just Russian, between July 1, 2020, and June 30, 2021. By June 2021, Ukraine was the second-most impacted country we observed, reflecting 19% of all notifications of nation-state threat activity that we provided to customers during that time, largely due to the ramp up of Russian activity.

When it says “by June 2021,” this is in the context of the Russian invasion of Ukraine starting 24 February 2022 (eight months later); Internet-based attacks are linked all the way back to at least March of the prior year.

Cyber attacks may have been prepared long ahead of time, yet they also came in secondary or even tertiary waves behind kinetic attempts at damage. For example, Ukrtelecom fought off Russian cyber attacks at the end of March 2022. At the start of that same month, a major communications tower in a civilian area of Kyiv had been fired upon with missiles.

Government and information technology services were flagged as the most targeted during Russia-aligned network intrusions or destructive attacks, although Microsoft did a weird thing by grouping Ukrainian finance, defense, transportation and more into an “Other” category… while listing Internet and defense as separate too.

Source: Microsoft

Microsoft believes only a half-dozen Russian government (i.e. “sponsored”) groups launched more than two hundred attacks against Ukraine. Thirty-seven were classified as destructive; however, less than half of those fell into a broad category of critical infrastructure.

More than 40% of the destructive attacks were aimed at organizations in critical infrastructure sectors that could have negative second-order effects on the government, military, economy, and people. Thirty-two percent of destructive incidents affected Ukrainian government organizations at the national, regional, and city levels.

Microsoft’s report doesn’t mention levels of capability. Elsewhere they’ve said things to the press like Russia “brought all their best actors to focus on this” without providing any real scale to measure against.

Russian attacks have been plagued by incompetence on land, at sea, and in the air, so it’s hard to tell whether Microsoft is laughing at Russia’s technical ability or being serious about a telco’s one-day 20% drop in service being as bad as it gets.

In other words, when people say things like Ukraine is doing better than expected, it’s probably more accurate to say Russian ability is even more overblown than a Tesla: riddled with fraud and in-fighting, a dumpster fire of the “strong man” myth (e.g. a paper bear).