CA Tesla Kills Two in Head-on Collision

The CHP report suggests the Tesla locked onto the other car headed directly at it

According to the investigation, 30-year-old Shawn Borecha was driving a white 2021 Tesla Model 3 southbound on Highway 145 at an undetermined rate of speed… entering the northbound lane….

In an attempt to avoid a head-on collision, the Hyundai’s driver veered into the southbound lane, but Borecha simultaneously returned to the southbound lane, resulting in a head-on collision.

JD Vance Announces His 2028 Campaign of Unity: Antisemitism

Axios published a real piece of work this week on Vice President JD Vance’s 2028 strategy. The sourcing tells you everything: “Vance aides,” “outside Vance allies,” “Republicans close to Vance,” “person familiar with Vance’s thinking.”

That’s not journalism. That’s dictation.

The headline discovery is how Vance plans to be a “voice of unity” against democracy. He says he will delay his “purity tests” until after he is in power. He says he will stay above the fray to take power while others do the dirty work for him.

This was published after Vance spoke at Turning Point USA’s AmericaFest, where he was asked to draw a line against antisemitism in the Republican coalition. His answer sounded like both an endorsement of antisemitism and a quote out of Mein Kampf:

When I say that I’m going to fight alongside of you, I mean all of you — each and every one.

All of you.

The rhetorical move Vance makes here is using the language of liberation to describe extermination. Freedom as the label for the death camp.

The Nazi phrase “Arbeit macht frei” (“Work sets you free”) was posted at the gates of “labor camps” where prisoners were worked to death, and where inmates bitterly sang “Arbeit macht frei durch Krematorium Nummer drei” (“Work sets you free through Crematorium Number Three”).

“Unity” as the label for welcoming antisemites. “Inclusivity” as the label for a coalition that includes those who want Jews eliminated. “All of you” when “all of you” excludes liberals and welcomes Nazis.

The inversion is the technique. Use the opposite word. Call the thing by what it destroys.

“All of you — each and every one” SOUNDS inclusive. Yet when the question is whether to exclude Holocaust deniers, and the answer is no, then “inclusion” becomes the mechanism for targeting Jews. Welcoming the exterminators is expelling their targets.

That’s military intelligence doctrine, disinformation 101. And that’s the documented Nazi rhetorical method: weaponize the language of the thing you’re killing.

Historians refer to this as Hitler’s Volksgemeinschaft playbook. Inclusive language built his coalition, which was defined by whom it targeted. Hitler, like Vance, campaigned on unity, defining a “People’s Community” by the groups it marked for extermination.

Vance’s “Unity” is Divisive

At that same AmericaFest, Ben Shapiro warned that the conservative movement was “in serious danger” from figures trafficking in “conspiracism and antisemitism.” Steve Bannon responded by calling Shapiro “a cancer.” The weekend featured open warfare over whether to embrace or exclude Nick Fuentes, a Holocaust denier who openly celebrates Nazism.

Vance’s position: diversity to the max. No exclusions. Unity as disunity.

When a Telegram chat surfaced showing Republican elected officials invoking Hitler and using racist slurs, Vance said it was welcome. The same man who earlier said liberal college group chats needed to be cleansed and regulated suddenly normalized hate speech as acceptable, like “anything said in a college group chat.”

Rep. Don Bacon, a Republican from Nebraska, was direct about Vance failing to call out the Nazism of the GOP:

I’ll never vote for someone who is ambiguous in their stance against antisemitism…

Axios reported none of this.

They were too busy eagerly transcribing the “five-pronged plan” that their access to the antisemitism bought them.

Vance’s Hitler Thing

Vance texted his 2016 Yale roommate Josh McLaurin:

I go back and forth between thinking Trump is a cynical asshole like Nixon who wouldn’t be that bad (and might even prove useful) or that he’s America’s Hitler.

Read that again.

Vance did not present good versus bad. He said a cynical Nixon “wouldn’t be that bad” and “might even prove useful.” The alternative he weighed was America’s Hitler. Both options were on the table, both outcomes he could work with, and arguably he was even leaning toward Hitler.

The standard narrative is that Vance “evolved” or sold out for power. But the real pattern is actually easy to see:

2016: Floats “America’s Hitler” as acceptable.

2025: Won’t condemn people who actually invoke Hitler, and uses “All of you — each and every one” to explicitly make room for Nazis running America. Or as Fuentes put it recently, 40% of the White House already are Nazis.

Vance didn’t abandon a position to land on antisemitism as his voice. He found his voice could be louder among his people.

Coin-operated Vance

The transformation came with a price tag. Peter Thiel, the infamous tech billionaire who preaches ACTS 17 flavors of Nazism and believes democracy and freedom are incompatible, spent $15 million on Vance’s 2022 Senate race.

That’s the largest single-donor contribution to a Senate candidate ever recorded. Thiel’s $15 million wasn’t really campaign funding. It was an anti-democratic installation of an ideological product in government as part of a documented territorial sovereignty project (Nazi Lebensraum) that Thiel has relentlessly pursued through charter cities, defense contracts, and now the Vice Presidency.

Before the Vance installation came the trials: Thiel hired Vance at Mithril Capital in 2016. Then came the venture fund: Thiel backed Vance’s firm Narya Capital with roughly $100 million. Then came the introduction: Thiel personally brought Vance to Mar-a-Lago in February 2021 to connect with the man Vance had praised as America’s Hitler.

By then, Vance had deleted what was being discussed as a pattern of divisive and critical tweets. The product was being recast and made ready for market.

The “evolution” wasn’t a change of heart. It was simply a Thiel project to refine Nazism into a coin-operated breach of American politics. David Duke used to be unelectable, as his former campaign manager proved in 1996 with an “America First” platform.

Ralph Forbes campaigning in preacher garb for the American Nazi Party, before becoming the official America First candidate for President in 1996

Axios Complicit in the Antisemitism Campaign

Susie Wiles, Trump’s own chief of staff, acknowledged to Vanity Fair that Vance’s pro-Trump conversion appeared “politically expedient.”

Trump’s chief of staff said that. On the record.

Axios had access to the same information. They chose to publish this instead:

Vance aides say he’s focused on next November’s midterms, not thinking about 2028.

This is journalism failing to journal. Axios believes it needs “people familiar with his thinking” to return their calls. Axios believes real coverage ends the relationship. So they print a one-sided press release, format the “five-pronged plan,” and move on like they did something more than repeat lies.

The actual story was right in front of them. The Vice President of the United States, at a major conservative gathering, refused to draw any line against antisemitism in his coalition. The Vice President literally just said his version of unity is hate, is white supremacy, is where “all” and “everyone” mean a very particular form of unification.

He did this publicly, on stage, the same weekend Axios was taking notes from his allies about his “unity” message.

They had the story. They chose access to the antisemitism.

Real Unity

Vance’s “unity” isn’t despite the antisemitism. The antisemitism is the unity. It’s what holds the Vance coalition together with the explicit promise that there will be no exclusions to stop hate, no tests for safety. Holocaust deniers are now explicitly welcome. Hitler-invokers are explicitly welcome. January 6 rioters? All of them invited, each and every one, to attack diversity.

That’s the 2028 campaign. Axios just repeated it for you without questioning it.

They printed antisemitism of the Vance campaign for 2028 as his “voice of unity”.

Cryptographic Provenance of C2PA Ain’t Gonna Stop Deepfakes

Fortune just quoted ex-Palantir New York Assemblymember Alex Bores on deepfakes. He says fake faces made by AI are “a solvable problem” using the Coalition for Content Provenance and Authenticity (C2PA) standard that cryptographically signs media files with tamper-evident credentials.

Bores’ cup is half full. That’s not a real solution, though. The half he’s missing is the half that matters most in deepfakes.

Abusing an HTTPS Analogy

Bores lazily compares C2PA adoption to the shift from HTTP to HTTPS in the 1990s:

It’d be like going to your banking website and only loading HTTP, right? You would instantly be suspect.

The analogy is instructive, though not in the way he intends. HTTPS works in very specific ways:

  1. A centralized trust hierarchy (certificate authorities) decides who gets certificates preloaded and trusted
  2. The check is simply valid certificate or not, meaning a truly binary test (setting aside algorithms)
  3. Browsers now enforce it automatically, meaning users no longer choose
  4. The failure mode has been made abundantly clear: no glowing padlock, no transaction
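That binary, no-user-choice trust model is visible directly in Python’s standard library defaults. A minimal sketch, illustrative rather than production code, with the hostname left as a caller-supplied placeholder:

```python
import socket
import ssl

def https_trust_check(hostname: str, port: int = 443) -> bool:
    """Binary HTTPS trust test: the certificate chain either validates
    against the preloaded CA hierarchy, or the handshake fails.
    There is no partially-valid state."""
    context = ssl.create_default_context()  # trust hierarchy preloaded (1)
    try:
        with socket.create_connection((hostname, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=hostname):
                return True  # valid certificate: glowing padlock (2, 4)
    except (ssl.SSLError, OSError):
        return False  # no padlock, no transaction (4)

# Enforcement is automatic and mandatory; the user no longer chooses (3).
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True
```

The point is the defaults: verification is not optional, and failure is total.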

For C2PA to work the same way, platforms would need to refuse to display unsigned content. That breaks the web and ends up the opposite of HTTPS, because it elevates a digital rights management fight far above the infrastructure layer. It also creates a two-tier system where institutional media gets a pay-to-speak trust signal and everyone else is suspect and cancelled by default.

Even Bores acknowledges the adoption problem:

The challenge is the creator has to attach it.

Like, self-signed?

HTTPS succeeded because servers had to implement it or get regulated out of existence. The mandate was that the gas pipes can’t leak, not that the gas has to be quality content. CardSystems was crushed for failing to stop leaks. PCI DSS dropped a hammer on leaky payment card transactions everywhere. HTTPS was mandated on the transit path under threat of an absolute ban.

In 2009 Google called me into their offices and begged for help continuing use of broken and insecure SSL, because they thought I could lobby for them to stop PCI mandating strong HTTPS. They liked leaky pipes. Talk about regulatory authority forcing innovation. Google lost that argument, big time. And I certainly didn’t take their money for dirty work. I told Google to stop being a jerk and help PCI help them protect their own users. There’s simply no equivalent forcing function like this for individual content creators.

Verification Isn’t Perception: Age of the Integrity Breach

The deep problem is that C2PA solves for the wrong layer. Cryptographic provenance answers whether content is signed by a trusted source. It does nothing for the integrity of a viewer, whether they accurately perceive what they see.

As I wrote yesterday, the people who are most vulnerable to synthetic face manipulation are those with the least perceptual training. Beware the isolated communities falling behind in the “face-space” race, who never developed the dimensions to distinguish unfamiliar groups.

They can’t detect fakes because they never learned to see the reals.

And a C2PA warning label ain’t gonna fix that.

  • Labeled fakes still cause harm. Bores himself notes that “even a clearly labeled fake can have real-world consequences” for deepfake porn victims. The label doesn’t undo the perceptual and emotional damage.
  • Signed content can still deceive. Authentic footage, cryptographically verified, can be selectively edited, deceptively framed, or presented without context. C2PA tells you the file wasn’t tampered with. It doesn’t tell you whether the framing is honest.
  • The viewer still has to see. If you can’t distinguish faces from unfamiliar ethnic groups, you can’t evaluate whether signed footage of “protestors” or “terrorists” or “immigrants” actually shows what the caption claims.
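The gap between “the file wasn’t tampered with” and “the framing is honest” fits in a few lines. C2PA binds assertions to media with certificate-based signatures; the sketch below substitutes an HMAC as a stand-in for the real signature scheme, and the footage bytes and captions are invented for illustration:

```python
import hashlib
import hmac

KEY = b"demo-signing-key"  # stand-in for a C2PA signer's private key

def sign(media: bytes, caption: str) -> str:
    """Bind media bytes and an assertion (the caption) into a
    tamper-evident credential, conceptually as a C2PA manifest does."""
    return hmac.new(KEY, media + caption.encode(), hashlib.sha256).hexdigest()

def verify(media: bytes, caption: str, credential: str) -> bool:
    return hmac.compare_digest(sign(media, caption), credential)

footage = b"\x89PNG...authentic-frame-data"  # hypothetical bytes
cred = sign(footage, "Protest outside parliament, 2025")

# Any tampering with bytes or caption is detected:
assert verify(footage, "Protest outside parliament, 2025", cred)
assert not verify(footage + b"x", "Protest outside parliament, 2025", cred)
assert not verify(footage, "Riot outside parliament, 2025", cred)

# But a dishonest caption, signed at the source, verifies perfectly.
# The check proves integrity of the claim, never truth of the claim.
misleading = sign(footage, "Riot outside parliament, 2025")
assert verify(footage, "Riot outside parliament, 2025", misleading)
```

The last two lines are the whole problem: cryptography validates whatever the signer asserted, honest or not.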

A Tale of Two Problems

Bores is right about the obvious stuff: infrastructure matters. C2PA should still be the default in devices like cameras and even phones. Platforms should surface provenance data. Institutions should require it for evidentiary contexts.

But infrastructure solves an institutional problem related to journalism, courts, banking, and official communications. It doesn’t solve for the human problem.

The cup is half empty because the human problem is perceptual poverty. The solution isn’t going to be cryptographic. It’s exposure – structured, high-volume exposure that builds out the perceptual dimensions people need to see what they’re looking at.

C2PA answers: “Should I trust a source?”

Perceptual training answers: “Can I see what’s actually in front of me?”

Both questions matter, yet Bores is only asking the far less important one.

Can You Spot AI? The Redneck Problem in Synthetic Face Detection

When an outsider gets off a plane in Nepal for the first time, all the faces in the airport crowd blur together. A month later, they see Tibetans, Indians, Chinese, Nepalese. Mountain faces, valley faces. Nobody teaches the outsider what to look for. They just experience exposure and the human perceptual system builds the categories.

When a mountain village Maoist teenager points an AK-47 at that outsider, the out-of-place hostile appearance becomes obvious, yet is identified far too late. The outsider arrived with a collapsed face-space for South Asians. A month later, the axes develop to distinguish Sherpa from Tamang from Newar, friendly from hostile. Perceptual learning creates differentiation as statistical exposure builds out reliable dimensions.

Boy with automation technology wows the ladies in Butwal, Nepal. Look at his face, and what do you see? Source: AP

As someone who grew up in the most rural prairie in Kansas, I can tell you this is the redneck problem: someone whose environment didn’t provide the data to build certain distinctions is vulnerable.

We knew as kids we weren’t supposed to shoot signs. Wasn’t that the whole point of shooting the signs? Our red necks were a physical marker of the environmental conditions, an isolation that predicts perceptual poverty.

The person who “can’t tell them apart” isn’t lazy or hostile so much as a product of their (often fear-based) isolation. They’re accurately reporting a self-imposed, limited perceptual reality. Their pattern recognition system, stuck out in the fields alone, never benefited from human training data. They could identify lug nuts and clay soil yet not a single tribe of Celts.

The same problem is crippling Western synthetic face detection research related to deepfakes. And it’s a problem I’ve seen before.

Layer Problem

My mother, a linguistic anthropologist, and I published research on Nigerian 419 scams starting nearly twenty years ago. We argued that intelligence brings vulnerability and published papers saying so. We even presented it at RSA Conference in 2010 under the title “There’s No Patch for Social Engineering.”

One of our key findings: intelligence is not a reliable defense against social engineering.

The victims of advance fee fraud weren’t stupid. They were, disproportionately, well-educated professionals such as university professors, doctors, lawyers, financial planners. People who trusted their reasoning.

I remember training law enforcement investigators one day, in a windowless square room of white men bathed in drab colors and cold fluorescent lighting, and being told that this concept of wider exposure would be indispensable to their fraud cases.

A 2012 study in the Journal of Personality and Social Psychology then confirmed our work, finding the same pattern more broadly: “smarter people are more vulnerable to these thinking errors.”

These researchers, without reference to our prior work, found that higher SAT scores correlated with greater susceptibility to certain cognitive biases, partly because intelligent people trust their own reasoning based on past success and don’t notice when it’s being disrupted or bypassed.

The attack works because it targets a layer below conscious analysis. You can’t defend against bias attacks with intelligence, because intelligence operates at the wrong layer. The defense has to match the attack surface.

I’m watching the synthetic face detection literature make the same mistake again.

Puzzle This

A paper published last month in Royal Society Open Science tested whether people could learn to spot AI-generated faces.

The results were striking but confusing.

Without training, typical observers performed below chance: they actually rated synthetic faces as more real than real ones.

This isn’t incompetence. It’s a known phenomenon called AI hyperrealism: GAN-generated faces are statistically too average, too centered in face-space, and human perception reads that as trustworthy and familiar.

Super-recognizers, the top percentile on face recognition tests, performed at chance without training. Not good, but at least not fooled by the hyperrealism effect.

Then both groups got five minutes of training on rendering artifacts: misaligned teeth, weird hairlines, asymmetric ears. The kind of glitches GANs sometimes leave behind.

Training helped, unlike the study I examined back in 2022. Trained super-recognizers hit 64% accuracy. So here’s the puzzle: the training effect was identical in both groups. Super-recognizers didn’t benefit more or less than typical observers.
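In signal-detection terms, that below-chance result is simply negative sensitivity. A short sketch with illustrative rates (not the paper’s raw data), treating “synthetic” as the signal to detect:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity d' = z(hit rate) - z(false alarm rate), where hits are
    synthetic faces correctly flagged and false alarms are real faces
    wrongly flagged. Negative d' means fakes look MORE real than reals."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Untrained observer (illustrative numbers): flags only 40% of fakes
# while wrongly flagging 55% of real faces -- the hyperrealism effect.
untrained = d_prime(hit_rate=0.40, false_alarm_rate=0.55)

# After artifact training, the boost is roughly additive and the same
# for both groups, consistent with a second, independent channel.
trained = d_prime(hit_rate=0.64, false_alarm_rate=0.36)
```

An identical additive shift in d’ across groups is exactly what an extra, unrelated detection channel predicts.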

The researchers’ conclusion:

SRs are using cues unrelated to rendering artefacts to detect and discriminate synthetic faces.

Super-recognizers are detecting something the researchers could not identify and therefore can’t train. The artifact training adds a second detection channel on top of whatever super-recognizers are already doing. But what they’re already doing is presented as a black box.

Wrong Layer, Again

The researchers are trying to solve a perceptual problem with instruction. “Look for the misaligned teeth” is asking the cognitive layer to do work that needs to happen at the perceptual layer.

It’s why eugenics is fraud: selecting at the genetic layer for traits that develop at the exposure layer.

It’s also the same structural error as trying to defend against social engineering with awareness training. Watch out for urgency tactics. Be suspicious of unsolicited requests. Except, of course, you still need to allow urgent unsolicited communication. Helpful, yet not helpful enough.

The instruction targets conscious reasoning. The attack targets intuition and bias. The defense operates at the wrong layer, so it fails easily, especially where attackers hit hidden bias such as racism.

The banker who never went to Africa is immensely more vulnerable to fraud with an origination story from Africa. Intelligence lacking diversity opens the vulnerability, and also explains the xenophobic defense mechanism.

Radiologists don’t learn to read X-rays by memorizing a checklist of tumor features. They look at thousands of X-rays with feedback. The pattern recognition becomes implicit. Ask an expert radiologist what they’re seeing and they’ll often say “it just looks wrong” before they can articulate the specific features.

A surgeon with training will look at hundreds of image slices of the brain on a light board in one room and know where to cut in another room down the hallway.

Japanese speakers learning English don’t acquire the /r/-/l/ distinction by being told where to put their tongue. They acquire it through exposure. Hundreds of hours of hearing the sounds in context, and their perceptual system eventually carves a boundary where none existed before.

Chicken “sexers” are the canonical example in the perceptual learning literature. They can’t tell you how they distinguish the sex of day-old chicks. They just do it accurately, after enough supervised practice.

This is the pattern everywhere that humans develop perceptual expertise: data first, implicit learning, explicit understanding (maybe) later.

Five minutes of “look for the weird teeth” gets you artifact-spotting as a conscious strategy. It doesn’t build the underlying statistical model that makes synthetic faces feel wrong before you can say why. And just like with social engineering, the people who think they’re protected because they learned what to look for may be the most confidently wrong.

But the dependence on artifact-spotting also tells you something about the people who believe in it. They seek refuge in easy, routine, minimal-judgement fixes for a world that requires identification, storage, evaluation and analysis. The former without the latter is just snake oil, like placebos during a pandemic.

Compounding Vulnerability

The other-race effect is well-documented: people are worse at distinguishing faces from racial groups they haven’t had exposure to. The paper even found it in its own data: participants were better at detecting synthetic faces when those faces were non-white, likely because the GANs were trained primarily on white faces and rendered other ethnicities less convincingly.

“My friend is not a gorilla”. Google trained only on Asian and white faces to prevent confusion with animals, with disastrous results. Don’t you want to know who discovered the bias in their engineering and when?

But here’s where it gets dark, like the racism of a 1930s Kodak photograph, let alone the radioactive corn Kodak secretly detected in 1946.

If you have less exposure to faces from other groups, you’re worse at distinguishing individuals within those groups. And if you’re worse at distinguishing real faces, you’re certainly worse at detecting synthetic ones.

Deepfakes may be a racism canary.

The populations most susceptible to disinformation using AI-generated faces are precisely the populations with the least perceptual defense. Isolated communities. Homogeneous environments. Places where “they all look alike” is an accurate description of perceptual reality.

An adversary running a disinformation campaign knows this. Target the isolationists, because of their isolation. “America First”, historically a nativist racist platform of hate, signals perceptual poverty.

If you’re generating fake faces to manipulate a target population, you generate faces from groups the target population has the least exposure to. The attack surface is largest where perceptual poverty runs deepest.

The redneck who “can’t tell them apart” isn’t just failing a social sensitivity test. They’re a soft target. Their impoverished face-space makes them maximally vulnerable to synthetic faces from unfamiliar groups. They can’t detect the fakes because they never learned to see the reals.

This compounds with the social engineering vulnerability. The same isolated populations are targets for both perceptual attacks (fake faces they can’t distinguish) and cognitive bias attacks (scams that bypass reasoning). The defenses being offered like artifact instruction and awareness training both fail because they target the wrong layer.

Prejudice is Perceptual Poverty

The foundation of certain kinds of hate is ignorance. Not ignorance as moral failing – ignorance as literal absence of data.

The perceptual system builds categories from exposure. Dense exposure creates fine-grained distinctions. Sparse exposure leaves regions of perceptual space undifferentiated. The person who grew up in a homogeneous environment doesn’t choose to see other groups as undifferentiated. Their visual system never got the training data to do otherwise.

This reframes prejudice, or at least a big component of it. Not attitude to be argued with. Not moral failure to be condemned. Perceptual poverty to be remediated.

And here’s the hope: the human system is plastic.

A month in Nepal fixes the Nepal problem. A year in a diverse environment builds cross-racial perceptual richness. The same neural architecture that fails to distinguish unfamiliar faces can learn to distinguish them. It just needs data.

Diversity training programs typically target attitudes. “Stereotyping is wrong and here’s why.” But you can’t lecture someone into seeing distinctions their visual system isn’t configured to make, or maybe even is damaged by years of America First rhetoric. The intervention is at the wrong layer.

What if you could train the perceptual layer directly?

The Experiment Nobody Has Run

The synthetic face detection literature keeps asking “what should we tell people to look for?” The question they should be asking is “how much exposure produces implicit detection?”

Here are the study designs, for those looking to leap ahead:

For AI detection:

  • Recruit typical observers (not super-recognizers)
  • Expose them to 500+ synthetic and real faces per day, randomly intermixed
  • Provide only real/fake feedback after each trial, no instruction on features
  • Continue for 4-6 weeks
  • Test detection accuracy at baseline, weekly during training, and post-training
  • Compare to control group receiving standard artifact instruction
  • Test whether training transfers to faces from a new GAN architecture

For deeper questions of safety:

  • Use stimuli that include faces from multiple racial/ethnic groups
  • Test whether exposure-based training improves detection equally across groups
  • Test whether it also improves cross-racial face discrimination (telling individuals apart) as a side effect
  • Measure implicit bias before and after
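The feedback-only training loop above is easy to simulate. Everything in this sketch is a toy stand-in: faces reduced to a single hypothetical “averageness” cue, invented distribution parameters, and a nearest-mean learner playing the role of the implicit observer:

```python
import random

class ExposureObserver:
    """Toy implicit learner: no feature checklist, only running class
    statistics built from labeled exposure (the chicken-sexer regime)."""
    def __init__(self):
        self.sums = {True: 0.0, False: 0.0}
        self.counts = {True: 1, False: 1}  # weak prior avoids div-by-zero

    def judge(self, cue: float) -> bool:
        """Classify by nearest class mean; True means 'real'."""
        means = {k: self.sums[k] / self.counts[k] for k in (True, False)}
        return abs(cue - means[True]) <= abs(cue - means[False])

    def feedback(self, cue: float, is_real: bool) -> None:
        """Trial-by-trial real/fake label only, no feature instruction."""
        self.sums[is_real] += cue
        self.counts[is_real] += 1

rng = random.Random(0)
# Hypothetical parameters: synthetic faces cluster tightly near a
# 'hyperreal' average (0.8); real faces vary more widely around 0.4.
stimuli = [(rng.gauss(0.4, 0.2), True) for _ in range(500)]
stimuli += [(rng.gauss(0.8, 0.1), False) for _ in range(500)]
rng.shuffle(stimuli)  # randomly intermixed, per the protocol

observer = ExposureObserver()
correct = 0
for cue, is_real in stimuli:  # one 'day' of exposure trials
    correct += (observer.judge(cue) == is_real)
    observer.feedback(cue, is_real)

accuracy = correct / len(stimuli)  # rises well above chance, no rules taught
```

A real protocol would track this accuracy weekly, compare against an artifact-instruction control group, and then run the transfer test on faces from a novel generator.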

My prediction: exposure-based training will work for synthetic face detection, producing super-recognizer-like implicit expertise in typical observers. And as a side effect, it will build cross-racial perceptual richness.

The transfer test matters. If exposure-trained observers can detect synthetic faces from novel generators they’ve never seen, they’ve learned something general about real versus synthetic faces. If they can only detect faces similar to their training set, they’ve just memorized one architecture’s failure modes.

The cross-racial test matters more. If diverse exposure simultaneously improves AI detection and reduces perceptual other-race effects, you’ve found an intervention that works at the right layer.

Yoo Hoo Over Here

I’ve been watching security research make this mistake for twenty years.

Social engineering attacks bias. The defense offered: awareness training. Wrong layer.

Synthetic faces attack perception. The defense offered: artifact instruction. Wrong layer.

Prejudice operates partly at the perceptual level. The defense offered: diversity lectures about attitudes. Wrong layer.

In each case, the intervention appeals to conscious reasoning to solve a problem that operates below conscious reasoning. In each case, smart people are not protected – and may be more vulnerable because they trust their analysis.

The defense has to match the attack surface. You can’t patch social engineering with intelligence. You can’t patch perceptual poverty with instruction.

You patch it with data. Structured, extended, high-volume exposure that trains the layer actually under attack.

The redneck problem isn’t moral failure. It’s data deprivation.

The fix isn’t instruction. It’s exposure. The term redneck describes a remediable data deprivation, not a moral defect.

So, come on inside from the deadly heat of fieldwork that killed Catherine Greene’s husband, choose three scoops out of 31 ice cream flavors, and let’s have a chat about how American pineapples and bananas really got to Kansas.

Somebody should run the study.

And hopefully this time, we will see better citations.