Epstein Built Sex Trafficking Operation Inside a Children’s School

NPR reports the exact mechanism by which Jeffrey Epstein and Ghislaine Maxwell set up a trafficking base inside a children’s camp.

Epstein donated over $400,000 to the Interlochen Center for the Arts in Michigan, and the largest portion built him a lodge that served as a sex trafficking operations center on the campus. The recruitment pipeline is clear:

Ghislaine Maxwell would contact a school administrator. We’d like to come and stay in the cabin. The administrator would say, yes, great. What do you need? And Ghislaine would get back and say, I want — we want these things, and we’re coming. Once they’re there, they really were on their own… This lodge was his base there. And while he was there, he was walking around campus, and he had a little dog with him with Ghislaine Maxwell. And that’s where he ended up meeting two — a 13-year-old and a 14-year-old girl.

Maxwell handled base logistics with an administrator. The donation bought them a whole building inside the security perimeter of a school for children. The dog gave them a pretext to roam the grounds and approach minors. The institution’s own staff facilitated every visit, then disappeared.

Interlochen’s stated policy of course prohibited unsupervised donor-student contact, but the infrastructure they built for Epstein was designed to produce exactly the opposite. At least two teenage girls ended up trapped in Epstein’s orbit for years. One testified at the Maxwell trial about prolonged sexual abuse.

The story is not just about this operating base for trafficking teenage girls into prostitution. It’s also about how and why a school held the door open, and who Epstein’s customers were for his offers of sex with 13-year-old girls.

Source: Epstein Files

AI Security Researchers Getting Better at Crying Wolf About Privacy

A new Swiss paper on LLM-powered privacy attacks demonstrates only that casual pseudonymity is breakable, yet the researchers want us to be very worried.

We demonstrate that LLMs fundamentally change the picture, enabling fully automated deanonymization attacks that operate on unstructured text at scale

Oooh, so scary.

Someone who posts under an anonymous handle without thinking about correlation risks is vulnerable to a motivated attacker with LLM tooling.

Yeah, that’s as real as someone wearing an orange camo vest in a forest and thinking they can’t be seen. This is not news. I get that most people don’t think enough about privacy in advance of being breached, or maybe are misled by vendors (looking at you “VPN” and WhatsApp). But it’s still just Narayanan and Shmatikov’s original point in 2008 with better automation (updated in 2019). The cost of unmasking anonymous users dropped, yet the capability has NOT fundamentally changed.

Ok, security economics time.

Their whole argument is that LLMs made attacks cheap. Well, active profile poisoning makes defense cheap too. You don’t need to maintain perfect compartmentalization anymore because you just need to inject enough contradictory micro-data that the Reason step can’t converge. A few false location signals, some deliberately inconsistent professional details, and the attacker’s 45% recall at 99% precision collapses because their confidence calibration can’t distinguish genuine matches from poisoned ones.
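A toy illustration of the poisoning argument (my construction, not the paper’s actual pipeline): an attacker who infers an attribute by aggregating mentions needs a confidence margin to stay precise, and a handful of contradictory signals erases that margin, forcing abstention.

```python
from collections import Counter

# Toy sketch: the attacker infers a target's city by majority vote over
# location mentions, abstaining unless the top answer wins by a margin
# (that margin is what keeps precision high). A few poisoned mentions
# collapse the margin, so the calibrated attacker must abstain.
def infer_city(mentions, min_margin=2):
    counts = Counter(mentions)
    (top, n1), *rest = counts.most_common()
    n2 = rest[0][1] if rest else 0
    return top if n1 - n2 >= min_margin else None  # None = abstain

honest = ["Geneva", "Geneva", "Geneva", "Geneva"]
poisoned = honest + ["Lyon", "Lyon", "Lyon", "Oslo", "Oslo", "Oslo"]

print(infer_city(honest))    # Geneva
print(infer_city(poisoned))  # None
```

The defender never deletes the true signal; they just make it one voice among several, which is exactly the cheap, asymmetric move the paper never models.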

Boom. Their paper is toast.

Integrity breaches are everything now in security. Poisoning inverts the economics because LLMs are weak at verifying integrity. Their entire pipeline trusts that what people post about themselves is true, the kind of classic academic mistake that makes a good paper useless in reality.

The paper needed a Section 9 about what happens to their pipeline when even 5% of the candidate pool is actively adversarial.

This is a genuine lack of a threat model in a paper supposedly about threats. It blindly asserts a one-directional information asymmetry: the attacker reasons about the target, but the target never reasons about the attacker. Yet any real operational security practice already operates on the assumption that your adversary has a profiling pipeline. The Tor Project didn’t wait for this paper. Whistleblower protocols at serious newsrooms didn’t wait for this paper. The people who actually need pseudonymity already treat their online personas as adversarial constructions, not their natural expressions.

Their extrapolation is purely speculative. Why? Log-linear projection from 89k to 100M candidates spans three orders of magnitude beyond their data. They fit a line to four points and project it into a range where dynamics could change entirely, because denser candidate pools mean more near-matches, which could degrade the Reason step nonlinearly.
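To make the objection concrete, here is the shape of that extrapolation with invented numbers (these are illustrative, not the paper’s measurements): fit a line to four points on a log scale, then read it three orders of magnitude past the data.

```python
import math

# Illustrative only: four made-up (pool size, accuracy) points standing in
# for the paper's measurements, fit by ordinary least squares against
# log10(pool size), then projected far outside the measured range.
points = [(1_000, 0.72), (8_900, 0.63), (30_000, 0.55), (89_000, 0.45)]
xs = [math.log10(n) for n, _ in points]
ys = [a for _, a in points]

n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

def project(pool_size):
    return slope * math.log10(pool_size) + intercept

print(project(89_000))       # inside the measured range: defensible
print(project(100_000_000))  # 10^8 candidates: pure extrapolation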

Their multi-model stack totally obscures the mechanism. They use Grok 4.1 Fast for selection, GPT-5.2 for verification, Gemini 3 for extraction, GPT-5-mini for tournament sorting. It’s LLM salad. What’s actually doing the work? If the attack works partly because GPT-5.2 has seen these HN profiles in training data, that’s a memorization attack dressed up as a reasoning capability. They admit they can’t remove spilled ink from their water, an inability to separate reasoning from memorization. That’s a huge problem for the conclusions.

Their case study looks exactly like their Anthropic Interviewer results: 9 of 33 identified, 2 wrong, 22 refusals, manually verified with acknowledged uncertainty. That is not systematic evidence.

I mean this is a paper that claims to be about threats, yet it’s not really about threats at all. It’s about defenders who don’t defend.

Section 3.3 admits their ground truth is from users who voluntarily linked accounts across platforms. They’re studying a sheep population that doesn’t even care if they have a sheepdog or a fence, which is how they ended up with a paper about the theory of a wolf being infinitely unstoppable.

Anthropic Says AI Cracked Encryption! The Key Was in the Lock

Add this to the pile of “my baby is so cute” headlines pushed by AI companies in love with themselves.

Anthropic published a detailed account of Claude Opus 4.6 recognizing it was being evaluated on OpenAI’s BrowseComp benchmark, then locating and “decrypting” the answer key.

Evaluating Opus 4.6 on BrowseComp, we found cases where the model recognized the test, then found and decrypted answers to it—raising questions about eval integrity in web-enabled environments.

Unfortunately, I have to call it out as disinformation. Or what on the wide open Kansas prairie we used to call…

Anthropic narrates a breathless sequence: their model exhausted legitimate research over 30 million tokens, hypothesized it was inside an evaluation, systematically enumerated benchmarks by name, found the source code on GitHub, and wrote its own SHA256-and-XOR decryption functions to extract the answers.

OMG, OMG… wait a minute. Source code?

The word “decrypted” right next to finding source code immediately raised my suspicions.

BrowseComp Terminology Matters

BrowseComp’s encryption is a repeating-key XOR cipher.

That’s not great.

OpenAI’s browsecomp_eval.py implements the entire mechanism in four steps that take just five lines:

  1. SHA256-hash a password string
  2. Repeat the 32-byte digest to match the ciphertext length
  3. base64-decode the ciphertext
  4. XOR byte-by-byte

That is it. That is their cryptographic apparatus.

And… wait for it… the password is the canary string.

Each row in the dataset has three fields: the encrypted question, the encrypted answer, and the canary. The canary is the password.

When the key is co-located with the ciphertext in the same CSV, that’s the key in the door. It’s not even under the mat, or a rock in the garden. It’s just sitting right there, as if no lock at all.
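The whole apparatus, key co-location included, fits in a sketch. The function names mirror the derive_key()/decrypt() the writeup mentions; the canary string, the answer, and the row layout below are stand-ins, not the dataset’s real values.

```python
import base64
import hashlib

# The scheme as described: SHA256 the password, repeat the 32-byte digest
# to the ciphertext length, base64-decode, XOR byte-by-byte.
def derive_key(password: str, length: int) -> bytes:
    digest = hashlib.sha256(password.encode()).digest()  # 32 bytes
    return (digest * (length // len(digest) + 1))[:length]

def decrypt(ciphertext_b64: str, password: str) -> str:
    raw = base64.b64decode(ciphertext_b64)
    key = derive_key(password, len(raw))
    return bytes(a ^ b for a, b in zip(raw, key)).decode()

def encrypt(plaintext: str, password: str) -> str:
    # Inverse operation, included only so the demo is self-contained.
    raw = plaintext.encode()
    key = derive_key(password, len(raw))
    return base64.b64encode(bytes(a ^ b for a, b in zip(raw, key))).decode()

# The "security model": the password sits in the same row as the ciphertext.
canary = "CANARY-EXAMPLE"  # stand-in value
row = {"answer_encrypted": encrypt("Paris", canary), "canary": canary}
print(decrypt(row["answer_encrypted"], row["canary"]))  # Paris
```

Anyone holding the row holds both the lock and the key, which is why “decrypted” is doing so much dishonest work in the headline.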

The CSV is open to the public without authentication, served from openaipublic.blob.core.windows.net. The algorithm is published, under MIT license.

Key-in-Lock Encryption is Not Encrypted

Obfuscation is the right word here. If you have the key, you have decrypted data. I know that, technically, a lock with a key in it is still a lock. But calling it a challenge to unlock when you already hold the key is very misleading.

It is obfuscation designed to prevent a dumb search engine, one that ignores the key sitting right next to the ciphertext, from indexing plaintext answers. The canary string’s original purpose was to flag training-data contamination, not to serve as a cryptographic key.

OpenAI, which is turning out to have one of the worst engineering cultures in history, repurposed it as one. The result is a data structure where the key is left sticking out of the locked door like a handle.

Shame on Anthropic

The blustery writeup spends thousands of words on the model’s journey and never once mentions that the decryption key is in the same file as the ciphertext.

This is NOT ok.

Anthropic’s engineers obviously read the source code because they describe the XOR scheme, the SHA256 key derivation, the canary string. They see the mechanism. And yet, somehow, they chose not to admit obfuscation.

Instead, the narrative escalates through a sequence of active verbs: the model “hypothesized,” “identified,” “located,” “decrypted.”

Decrypted? Shame on you.

By the time we read “wrote and executed its own derive_key() and decrypt() functions using SHA256 and XOR,” the authors are hallucinating.

They want us to believe a model reverse-engineered a cryptosystem? What? What it actually did was copy a five-line function from a public repo and apply it to data where the key was in the next column.

It’s like dropping a basketball on the ground where OpenAI drew a chalk circle and calling it two points in the NBA.

The writeup also never explains a very, very important security delta. The encryption failed? Ok, well, then why did the other controls succeed? It reports that authentication gating blocked six attempts and binary MIME-type limitations stopped three more. Somehow encryption is the only thing that cracked.

Nope.

The controls worked when they were real access controls. The encryption didn’t work because it wasn’t real encryption. Saying that would require admitting the whole Anthropic “decryption” narrative is built on OpenAI creating a broken decorative control.

Real Controls Are Real

Anthropic’s own evidence reveals their human hallucinations. The defenses that stopped the model were real access controls, authentication gating and content-type filtering, mechanisms that restrict who or what can retrieve the data in a usable format. The fallback defense was simple keyword filtering: blocking search results containing any variation of “BrowseComp.”

They abandoned cryptographic framing entirely.

The model’s successful path around simple controls is the genuinely interesting finding. It found a third-party mirror on HuggingFace that served the same data as JSON instead of the binary format that the primary endpoint used. It routed around a MIME-type restriction by finding a leak. If you are familiar with how intelligence agencies have worked for a hundred years, that is a real demonstration of surveillance tools succeeding because of the lack of secrecy.

The Dog That Didn’t Bark

Anthropic didn’t critique OpenAI’s benchmark design anywhere, but I will.

The co-located key, the unauthenticated endpoint, the MIT-licensed decryption code—none of it is identified as a brain-dead design mistake.

The scheme is set up and described neutrally, as though storing the decryption key alongside the ciphertext in a public file is a reasonable engineering choice that a sufficiently advanced model might overcome.

What year is this? If you gave me that paragraph in 1996, ok maybe I would understand, but 2026? WTAF.

Both companies benefit from OpenAI sucking so hard at safety. OpenAI gets to fraudulently say it has an encrypted benchmark. Anthropic gets to fraudulently say its model decrypted it. This is religion, not engineering. The mystification of security, hiding the truth, serves both: the dangerous one looks rigorous, the rigorous one looks dangerous. Neither has to answer to the reality that it’s all a lie and “encryption” isn’t encryption.

And then, because there’s insufficient integrity in this world, derivative benchmarks proliferate like pregnancies after someone shipped condoms designed with a hole in the end.

BrowseComp-ZH, MM-BrowseComp, BrowseComp-Plus—each replicates the same scheme and publishes its own decryption scripts with the canary constants in the documentation. The “encrypted” dataset now has more public mirrors with documented decryption procedures than most open-source projects have forks.

Expect Better

A security control that exists in form but not in function needs to be labeled as such.

Performative security is all around us these days, and we need to call it out. Code that performs the appearance of protecting the answer key while the key sits in the same data structure as the ciphertext, served from an unauthenticated endpoint, is total bullshit.

It is an integrity clown show.

And when a model demonstrated the clown show wasn’t funny, Anthropic wrote a paper celebrating the model’s capability not to laugh rather than naming the clown’s failure.

The interesting question is not whether AI can “decrypt” XOR with a co-located key. I think we passed that point sometime in 2017. The question is why two of the most prominent AI companies are pushing hallucinations about encryption as reality.

Google Announces Colonial-Era Compensation Plan for Human Devaluation

Alphabet just filed an SEC disclosure awarding CEO Sundar Pichai a compensation package worth up to $692 million over three years. His base salary stays at $2 million. The rest is equity pay, which means performance stock units tied to Alphabet’s total shareholder return relative to the S&P 100, plus new incentive units tied to the board’s own valuation estimates of Waymo and Wing.

No operational milestones are specified.

No headcount targets.

No product quality benchmarks.

The company declined to comment on what Pichai actually has to do, but as someone who spent decades inside, I can explain.

Here’s the decoder ring.

How CEO equity is set up

Total shareholder return goes up either from increased revenue or from decreased spend. Remember “spend more to make more” as the cry for market growth? That’s long gone; the new Big Tech cry is “people are our largest cost, so get rid of them.” Mass layoffs now mechanically improve a metric that triggers a big payout to the five or six men playing this game.
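The mechanics are trivially simple. A back-of-envelope sketch (every number below is invented for illustration): with flat revenue, cutting heads drops costs, which raises earnings, which raises the per-share metrics that TSR-linked equity pays out on.

```python
# Invented figures: a big company with flat revenue "improves" its
# earnings per share purely by deleting payroll.
revenue = 300e9   # $300B, unchanged before and after
costs = 220e9     # $220B operating costs
shares = 12e9     # 12B shares outstanding

eps_before = (revenue - costs) / shares

layoffs = 12_000
fully_loaded_cost = 500_000  # assumed salary + benefits + overhead per head
savings = layoffs * fully_loaded_cost

eps_after = (revenue - (costs - savings)) / shares

# EPS rises with no new product, no new revenue, no new anything.
print(eps_before, eps_after)
```

No product shipped, no customer served; the metric moved anyway, and the payout follows the metric.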

Nobody writes “fire people” into the rules of an incentive structure, because the “decrease spend” dog whistle is already loud and clear.

The Waymo and Wing units disclosed are arguably signs of something even worse. Their value is determined by the compensation committee’s self-estimate of per-unit worth.

Not revenue.

Not ridership.

Not delivery volume.

Nothing connected to quality of life or value relevant to anyone affected by the product.

A good way to understand the “worth” being written: a Waymo that displaces human drivers faster is worth more. A Wing that removes human jobs from last-mile delivery is worth more than one that lets humans keep “costing” by existing.

The global management class has a specific training pipeline designed to separate the person making the decision from the people affected by it. That’s the colonial inheritance running through Cambridge and Wharton and McKinsey. The entire incentive structure points toward massive displacement and removal of humans.

“A really big spreadsheet and a baby are morally equivalent.” The $692 million pay package is what happens when boards agree.

The decoder

| What the filing says | What it means |
| --- | --- |
| “Performance-based equity” | Stock price goes up when layoff numbers go up |
| “Total shareholder return” | Earnings per share, which layoffs improve |
| “Per-unit value of Waymo” | Board’s own estimate, no external audit |
| “Scaling Other Bets” | Replacing human labor with autonomous systems |
| “Best interests of stakeholders” | Not employees — shareholders |
| “Base salary unchanged at $2M” | The part that sounds modest; the other $690M is equity |

For context: Microsoft’s Satya Nadella earned $96.5 million in fiscal 2025. Apple’s Tim Cook took home $74.3 million. Over the same three-year window, Pichai’s package is roughly three times Cook’s.

Meanwhile, the same tech sector reports record-high unemployment among software engineers, with companies citing “efficiency” and “AI-driven productivity” as the reasons they don’t need to rehire.

The scoreboard

Between 2022 and 2025, Big Tech announced over 100,000 layoffs across Alphabet, Microsoft, Meta, and Amazon alone. Here’s what happened to CEO compensation over the same period.

| Company | Jobs cut (2022–2025) | CEO pay trend |
| --- | --- | --- |
| Alphabet (Pichai) | 12,000+ | $226M (2022) → $692M/3yr (2026). Triennial equity grant tripled. |
| Microsoft (Nadella) | 25,000+ | $54.9M (2022) → $96.5M (2025). Record high. +76%. |
| Meta (Zuckerberg) | 21,000 | $1 salary. Net worth: $55B (2022) → $200B+ (2025). Operating margin doubled to 41% after cuts. |
| Amazon (Jassy) | 27,000+ | $1.3M (2022) → $40.1M realized (2024). Ten-year equity grant vesting accelerated by stock surge post-layoffs. |
| Apple (Cook) | ~600 | $99.4M (2022) → $74.6M (2024). The one CEO who took a voluntary cut; also the one who barely laid anyone off. |

The Apple line tells you everything. The only CEO whose compensation went down is the one who didn’t convert headcount into margin improvement. The market doesn’t reward restraint. It rewards extraction.

Is one CEO worth 1,500 engineers?

Pichai’s package: $692 million over three years. Cook’s rate over the same period: $224 million. The gap between the two is $468 million.

Alphabet’s own proxy filing reports median employee total compensation at $331,894. Levels.fyi puts the median Google software engineer at $322,000, a senior L5 at $417,000, and an entry-level L3 at $201,000.

If Pichai were paid at Cook’s rate, the $468 million difference would hire:

| Role | Google comp | Hires | Market comp | Hires |
| --- | --- | --- | --- | --- |
| Entry-level (L3) | $201K | 2,329 | $150K | 3,120 |
| Median SWE | $322K | 1,454 | $200K | 2,340 |
| Senior engineer (L5) | $417K | 1,123 | $250K | 1,872 |

The left column uses Google’s own inflated compensation rates. The right column uses normal market rates — and the numbers get dramatically worse for the board’s argument.
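The arithmetic behind those counts is just the gap divided by per-role annual compensation (figures from this article; integer division, so the Google-rate counts land one below the table’s rounded-up numbers):

```python
# The hiring math: the gap between Pichai's package and Cook's three-year
# rate, divided by annual compensation per role. Comp figures are the
# article's; division is floor, hence tiny rounding differences.
gap = 692_000_000 - 224_000_000  # $468M

google_comp = {"Entry-level (L3)": 201_000,
               "Median SWE": 322_000,
               "Senior engineer (L5)": 417_000}
market_comp = {"Entry-level (L3)": 150_000,
               "Median SWE": 200_000,
               "Senior engineer (L5)": 250_000}

for role in google_comp:
    print(role, gap // google_comp[role], "at Google rates,",
          gap // market_comp[role], "at market rates")
```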

Google’s comp inflation is part of the same cycle. For years Big Tech bid up engineering salaries to hoard talent and starve competitors. A $417,000 L5 at Google could have been two senior engineers at a normal company. Then Google dumped 12,000 of those overpriced engineers onto a market it had already distorted, cratering the value of the skills it spent a decade inflating.

The hoarding and the layoff are the same strategy in two phases: acquire talent to block competitors, then discard it.

At market rates, the full $692 million, a single person’s compensation package, would fund a year of salary for 4,613 entry-level engineers or 2,768 senior engineers. That’s more than a third of the 12,000 people Pichai fired in January 2023.

Alphabet’s board looked at thousands of engineers and one CEO and signaled the opposite of production. They chose to pay one person what it would cost to employ the people who produce things. That’s a rent-seeking decision, filed with the SEC, that says the company’s value pyramid is inverted: value comes from capture at a position of control, not from making anything.

Meanwhile the engineers who built Search, Chrome, Android, Maps, YouTube, Gmail, Cloud, and TensorFlow are sending applications into a market that Alphabet itself helped flood with 12,000 newly unemployed engineers. Pichai gets $692 million for presiding over the company those engineers made. The engineers get a job market that tells them their skills are worth less every quarter because the CEOs firing them are worth more.

Enclosure

The $692 million package and the tech unemployment headlines have a troubling connection. They’re the same balance sheet. Workers built the platforms over twenty years. The platforms replace the workers. The CEO gets paid for presiding over the conversion.

Thompson’s The Making of the English Working Class documents exactly this sequence: workers build value in a shared system, the system gets captured, the workers get expelled, and the capturers profit from both the thing that was built and the newly desperate labor supply. Google engineers built the commons — Search, Android, and TensorFlow, much of it literally open source. The commons got enclosed. The engineers got expelled. Acemoglu and Robinson call the result an extractive institution: one designed to concentrate wealth rather than distribute it. A compensation committee that awards $692 million to one person while eliminating 12,000 positions is that institution. The SEC filing is its constitution.

The colonial version of this cycle is the closest parallel. Big Tech extracts labor from engineers, converts it into platform value, then sells AI services back to the same companies that can no longer afford to hire the engineers Big Tech overvalued before discarding. The “AI-driven productivity” pitch to enterprises is selling automated labor back to a market these companies emptied of human labor. Extract the resource, process it, sell the product back to the territory you stripped. That’s not a metaphor. That’s the business model, filed quarterly with the SEC, described in the language of shareholder value.

That’s what these CEOs of Big Tech represent now. An incredibly cynical compensation design that treats human lives as throwaway; colonial material for rapid extraction and disposal.