A new report reveals how easy it is to show that Meta's business logic intentionally harms children
“One of the things I was struck by was how profoundly easy it was to identify this pro–eating-disorder bubble,” said Rys Farthing, data policy director at the advocacy group Reset Australia, who led the research.
Farthing said that exposure to the content was primarily driven by Instagram’s suggestions about which users to follow. Test accounts that expressed an interest in weight loss or disordered eating were quickly flooded with recommendations from the platform to follow other users with these interests, including those that openly encourage disordered eating.
The most telling part is twofold. First, Facebook tries to undermine trust in journalists, as I’ve written about before here. Their official response cited in the article is to allege that people reporting on harm to children don’t understand what it’s really like to be inside Facebook trying to profit from harm to children.
Second, the researcher here in fact says the exact opposite of what’s being alleged: she’s in the business of putting herself out of business, and she understands exactly why Facebook’s business model is so toxic.
Researchers, journalists, and advocates have been raising alarms about disordered eating content on Instagram for years, culminating in fall 2021 when internal Facebook documents provided by whistleblower Frances Haugen showed that Instagram led teen girls to feel worse about their bodies. This new report shows that Meta’s struggles to curb this kind of harm are still ongoing.
But Farthing and others hope change may be around the corner: US Sens. Richard Blumenthal and Marsha Blackburn recently introduced the Kids Online Safety Act, which would create a duty for platforms to “act in the best interests of a minor” using their services. The California legislature is considering a similar provision, modeled after the UK’s Age Appropriate Design Code, that would require companies to consider children’s “best interests” when building or modifying their algorithms.
“If we can muster the courage to actually hold tech companies to account, we could get some of this legislation through,” Farthing said. “And maybe when we have this conversation next year, I might actually have put myself out of business.”
Think hard about that contrast in the integrity of the work.
Then think hard about the fact that Facebook has attracted far more illegal child sexual abuse images than any other platform: nearly 30 million reports last year alone.
For a platform that claims to be technologically advanced, Facebook relies on obviously outdated methods and unethical practices that only invite more abuse and harm. Other tech companies take the exact opposite approach: any images they are unsure about are reported for further investigation, putting society and safety ahead of profits.
Facebook, for example, is known to classify millions of abused children as adults, treating this as a loophole to avoid the cost of protection: label a 13-year-old as “fully developed” and the reporting numbers go down. Facebook moderators have complained of being pressured to “bump up” children into the adult category or face negative performance reviews. This backfires: it is an open invitation for child abusers to flock to the platform, driving up the volume of abuse images to exploit what looks like ongoing willful ignorance and toxicity on the part of Facebook management.