Palantir Staff Access UK Identifiable Patient Data

Apparently winning against no competition has led to even worse things.

Palantir was awarded the FDP contract after winning a succession of pandemic-era deals, worth a combined £60 million, without competition.

How bad? Here’s the no-competition winner’s readout.

Under previously agreed rules, Palantir staff working on the FDP could access the National Data Integration Tenant (NDIT), the repository holding patient data before it is transferred to the “pseudonymized” analytics system, only if they applied for access to specific data sets.

A document released by NHS England says that Palantir staff can get a new “admin” role and access the NDIT and its identifiable patient data. Other consultants working on the FDP will get similar access.

The briefing document, seen by the FT and confirmed by The Register, said granting Palantir staff and others access to the data could create a “risk of loss of public confidence” in NHS England’s assurances about “safeguarding patient data and ensuring appropriate use and access to it.”

What sits inside the NDIT? The medical history of roughly fifty million people, that’s all. And so you might ask why the body certifying appropriate use is doing the opposite, giving away the access it was meant to constrain.

Well, first of all, it’s shocking the UK has anything to do with Palantir. The company posted a manifesto, line-by-line Nazi propaganda, calling for the postwar denazification of Germany to be undone. The position requires treating Allied victory as an overcorrection. Yeah, Palantir is openly saying Hitler should have won and England should be speaking German today. Their CEO boasts that he spends much of his time talking with, in his own words, real Nazis. We can assume he means Peter Thiel.

There’s no reason for anyone to have any confidence in Palantir. None. Their software is a mess of moats to trap customers, as they peddle fear to amass data and seize control over entire nations. Consider that the pandemic procurement that placed Palantir inside the NHS was billed as an emergency just so competition could be suspended. Palantir hates competition. The suspension allowed expansion of an operational footprint from which the FDP bid was later scored. The FDP contract then supplied the platform inside which a new admin role could be defined. Each authorization reduced public protection from Palantir.

Second, anyone who read Thiel’s biography (a childhood in a German enclave whose Nazi sympathies continued into the 1980s, a father whose career was spent building uranium infrastructure for apartheid’s clandestine nuclear weapons program) saw this coming. Public confidence in Palantir shouldn’t be discussed as if losing it would change anything. The UK government rammed the most toxic, least transparent company into its health system without any justification. Officials wrote into the procurement file that the public would object if the public knew, because Palantir, and authorized the harm anyway.

Third, the original restriction was a real thing that was meant to be defended, not handed over to a company whose published manifesto endorses Hitler winning, or at least reversing the postwar denazification of Germany to make Nazism great again. Pseudonymization inside the NDIT existed because access to identifiable patient records is a category Parliament treats differently from access to aggregated statistics. The whole public case for the FDP rested on that restriction. The restriction is gone.
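Pseudonymization is worth pinning down, because it is the technical boundary the restriction enforced. Here is a minimal sketch of the idea; the keyed-hash scheme, field names, and secret are my illustration, not NHS England’s actual implementation:

```python
import hashlib
import hmac

# Hypothetical secret held by the data controller. Downstream analysts
# never see it, which is what makes re-identification hard for them.
PEPPER = b"controller-held-secret"

def pseudonymize(record: dict) -> dict:
    """Replace direct identifiers with a keyed hash; keep analytic fields."""
    token = hmac.new(PEPPER, record["nhs_number"].encode(),
                     hashlib.sha256).hexdigest()
    return {
        "patient_token": token,          # stable linkage key, not an identity
        "diagnosis_codes": record["diagnosis_codes"],
        "year_of_birth": record["year_of_birth"],
    }

record = {"nhs_number": "943 476 5919",
          "name": "Jane Doe",
          "diagnosis_codes": ["E11.9"],
          "year_of_birth": 1961}

pseud = pseudonymize(record)
assert "name" not in pseud and "nhs_number" not in pseud
```

An admin role on the NDIT sits on the wrong side of this function: it sees `record`, not `pseud`, which is exactly the category the restriction was built around.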

Chinese Spy Caught as Mayor: LA Suburb Loses its Leader

The Attorney General’s office in California has released a statement about encrypted messages sent via WeChat, used to file charges against a mayor.

For example, in June 2021, a PRC official contacted Wang and other individuals via the WeChat encrypted messaging application with pre-written news articles, including a PRC official-written essay in the Los Angeles Times that stated: “China’s Stance on the Xinjiang Issue – There is no genocide in Xinjiang; there is no such thing as ‘forced labor’ in any production activity, including cotton production. Spreading such rumor to do defame China, destroy Xinjiang’s safety and stability, weaken local economy, suppress China’s development[.]”

Minutes later, Wang posted the article on her own website and responded to the PRC official with a link to the article on her website. The others in the group chat did the same. The PRC official responded: “So fast, thank you everyone.”

In August 2021, Wang and three other members of the same group chat shared links to the same article on their respective “news” websites, after which the PRC official thanked them for their “reporting.” At the PRC official’s request, Wang made edits to the article, sent the official a link to the article reflecting the requested change, then sent the official a screenshot showing the article had been viewed 15,128 times. In response, the official messaged, “Great!,” Wang replied, “Thank you leader.”

Mark of the Prompt: Google Threat Intelligence Group (GTIG) AI Report on Vulnerability Exploitation

A 1960 protest against Otto Preminger’s hiring of blacklisted screenwriter Dalton Trumbo. The picketers identified the threat by association, not by conduct. Ask yourself if you recognize the GTIG tactic.

I was happily reading through a new Google post called “GTIG AI Threat Tracker: Adversaries Leverage AI for Vulnerability Exploitation, Augmented Operations, and Initial Access” (a title almost as long as the post itself) when my eyes crashed into this box.

Now hold on, pardner. Attribution is the evidence? That’s not how anything is supposed to work. This prompt gets attributed to UNC2814, with a target of TP-Link firmware and Odette File Transfer Protocol (OFTP) implementations. Those are legitimate research areas. Bug hunters audit TP-Link firmware constantly. OFTP analysis appears in academic and industry venues. The prompt content matches real work. The dual-use nature of that work isn’t presented the way it should be here.

I mean to say that the classification applied by the post rests solely on attribution, and NOT on the content. To call this jailbreaking, GTIG would need to show that Gemini refuses the same prompt absent the framing. The report omits that demonstration. The argument runs in a circle. If a Mandiant analyst typed the prompt, it would not be flagged. If a TP-Link PSIRT engineer typed it, not flagged. The label applies only because Google says it knows the person asking wears a UNC2814 badge to work. How? Do they look too Chinese? Are they wearing an Alibaba hat? The persona claim itself, “I am a network security expert auditing for pre-auth RCE,” may still be entirely accurate. State-aligned operators are often skilled security researchers with different employers.

The report is therefore a huge letdown, because it does not show what Gemini would have refused absent the framing. No baseline refusal is demonstrated. The “jailbreaking” claim is merely asserted. A model that refuses to discuss embedded device auditing with a self-identified security researcher is broken, and using context-setting to get useful answers is not jailbreaking but normal interaction with a system designed to calibrate to the asker.
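The missing demonstration is easy to specify. A hedged sketch of the A/B baseline test GTIG could have run, with `ask` standing in for whatever model API they use; everything here, including the crude refusal heuristic, is hypothetical:

```python
# Hypothetical A/B harness: does the model refuse the bare prompt,
# and does the persona framing change the outcome?

TASK = ("Audit TP-Link firmware and an OFTP implementation "
        "for pre-auth RCE primitives.")
PERSONA = "I am a network security expert auditing for pre-auth RCE. "

def looks_like_refusal(reply: str) -> bool:
    """Crude marker heuristic; a real study would use labeled judgments."""
    markers = ("i can't", "i cannot", "i'm unable", "against policy")
    return any(m in reply.lower() for m in markers)

def jailbreak_claim_supported(ask) -> bool:
    """The claim needs BOTH: bare prompt refused, framed prompt answered.
    `ask` is any callable taking a prompt and returning the model reply."""
    bare_refused = looks_like_refusal(ask(TASK))
    framed_answered = not looks_like_refusal(ask(PERSONA + TASK))
    return bare_refused and framed_answered
```

If `jailbreak_claim_supported` comes back `False`, the framing did not defeat any refusal, and “jailbreaking” is the wrong word. That is the experiment the report never shows.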

The Wooyun example also makes this evident. The “more sophisticated” approach involves a Claude skill plugin built around 85,000 documented vulnerability cases from a defunct Chinese bug bounty platform. That is a knowledge base. Calling its use “in-context learning to steer the model” describes how skills work. The same architecture is how we build defensive tooling. The threat label works like a “mark”: it labels and tracks the actor, not the technique.
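A “skill” wrapped around a vulnerability corpus is just retrieval plus prompt assembly. A minimal sketch, where the corpus format and the naive keyword scoring are my assumptions, not the actual plugin:

```python
def top_k_cases(corpus: list[dict], query: str, k: int = 3) -> list[dict]:
    """Rank documented cases by naive keyword overlap with the query."""
    words = set(query.lower().split())
    def score(case: dict) -> int:
        return len(words & set(case["summary"].lower().split()))
    return sorted(corpus, key=score, reverse=True)[:k]

def build_prompt(corpus: list[dict], query: str) -> str:
    """'In-context learning': prepend the most relevant cases to the question."""
    cases = top_k_cases(corpus, query)
    context = "\n".join(f"- {c['id']}: {c['summary']}" for c in cases)
    return f"Known cases:\n{context}\n\nQuestion: {query}"

# Toy stand-in for the 85,000-case corpus
corpus = [
    {"id": "WY-0001", "summary": "sql injection in login form"},
    {"id": "WY-0002", "summary": "path traversal in file download"},
    {"id": "WY-0003", "summary": "sql injection in search endpoint"},
]
prompt = build_prompt(corpus, "sql injection in a search feature")
```

The same loop pointed at CWE writeups instead of Wooyun cases is a defensive triage tool. Nothing in the architecture encodes intent.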

The report’s headline finding seems to diverge from what I ended up reading. The executive summary opens with this claim: “For the first time, GTIG has identified a threat actor using a zero-day exploit that we believe was developed with AI.”

Ok, I get it, that “we believe.” GTIG admits Gemini was not used. The attribution to AI rests on forensic judgment of code style. Educational docstrings, a hallucinated CVSS score, textbook Pythonic formatting. These are aesthetic tells, so I’m listening. But all they show is that someone formatted the output cleanly.

They do NOT rise to the level of showing AI did the work.

The vulnerability itself was a 2FA bypass requiring valid credentials, based on a hardcoded trust assumption. Seriously. This is bread-and-butter stuff for any authentication code review on a day that ends in “y”. The report even admits fuzzers and static analyzers miss the category, which means humans have always been the ones finding it. I’m open to considering that an LLM is helping humans work faster, but claims that discovery is all new because an LLM may have formatted the writeup? No, that’s an artifact bump, like a typewriter producing cleaner manuscripts than a pen. That’s not the actual work of writing.
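The vulnerability class is familiar enough to sketch from the report’s description alone. This is my illustrative reconstruction of a hardcoded trust assumption, not the actual flawed code:

```python
# Illustrative 2FA check with a hardcoded trust assumption: any request
# claiming to come from an "internal" client skips the second factor.
TRUSTED_CLIENT = "internal-monitoring"   # hardcoded, and attacker-suppliable

def requires_second_factor(username: str, password_ok: bool,
                           client_id: str) -> bool:
    """Return True when a second factor must be checked."""
    if not password_ok:                   # valid credentials still required...
        raise PermissionError("bad credentials")
    if client_id == TRUSTED_CLIENT:       # ...but this branch is the bypass
        return False
    return True
```

A human reviewer greps for string-equality checks feeding authentication decisions; fuzzers and static analyzers, as the report concedes, tend to miss this because nothing crashes and no taint sink fires.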

And of course exploit researchers find exploits with tools. What else would we expect, potatoes? Vulnerability researchers have always reached for force multipliers. Fuzzers, symbolic execution, decompilers, taint analysis. AI joins a long catalog. Even if you are saying the hammer is being replaced by the nail gun, continuity is the story. The discontinuity framing is shrill and misleading.

Google’s pattern here is, unfortunately, also a page out of history. McCarthyism, anyone? How did that work out?

Let me take a moment to remind you what Google sounds like right now. Oppenheimer’s hearing was about a working professional doing the work he was hired to do, stripped of clearance because of attributed associations rather than any conduct. It literally classified his professional inquiries as suspect based on who he was assumed to be aligned with. And that 1954 hearing was formally vacated by the DOE in December 2022. When will all the people being accused in closed-door meetings at Google get their vacation?

Cold War threat reporting ran on the same closed-door, surface-level, judge-by-the-cover logic. Surveillance was “intelligence collection” when performed by allies and always “espionage” when performed by adversaries. Overthrowing a government was “stabilization” abroad yet “subversion” at home. The vocabulary was used to project an alignment, which is why everyone should be forced to study at least basic disinformation history before stepping into a security role that spreads disinformation.

GTIG needs the jailbreak frame because the alternative is too uncomfortable. The alternative is that frontier models are doing exactly what they are built to do, and competent security work is competent security work regardless of nationality.

The defender-attacker asymmetry many vendors claim does not hold at the prompt level. Having a team of experts call routine professional prompting “a simple form of prompt injection” preserves the asymmetry with rhetoric, without demonstrating it technically.

Look also at where the report describes APT45 “sending thousands of repetitive prompts that recursively analyze different CVEs and validate PoC exploits.” I have news for you: that is a description of automated vulnerability research at scale. American firms love to market the identical capability as a product feature, but seem to miss the obvious similarities because they don’t believe they carry the “mark”. Big Sleep, mentioned in the same report, is Google’s version.
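What “thousands of repetitive prompts that recursively analyze different CVEs” looks like in practice is a plain batch loop. A sketch with an injected model call; all names here are hypothetical:

```python
def analyze_cves(cve_ids: list[str], ask) -> dict[str, str]:
    """Batch loop over CVE identifiers: one analysis prompt per CVE,
    results keyed by ID. `ask` is whatever model client you have."""
    results = {}
    for cve in cve_ids:
        prompt = (f"Summarize the root cause of {cve} and outline "
                  f"how a proof-of-concept would be validated.")
        results[cve] = ask(prompt)
    return results
```

Swap the injected `ask` for a commercial triage product’s API and you have a feature page; swap it for Gemini and, apparently, you have a threat report.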

This reminds me of a grocery store I was in the other day. A young blonde boy kept telling the checkout worker that it was someone else who did a bad thing. Next to him was a man with the same blonde hair reinforcing the boy’s statement. What were they saying? “It can’t be me/him because the person who did the bad thing had dark hair”. Dark hair, dark hair, they kept saying over and over again. Bad thing? Dark hair. At no point did they say anything other than dark hair to identify a real bad guy. “Can’t be me, I don’t have dark hair”.

Ok Google, we see what you’re saying. But do you see what you’re saying? It’s a false narrative.