Less than six months after opening, Elon Musk’s Tesla Diner in Hollywood has the feel of a Rhodesian ghost town.
The celebrity chef is gone. Eric Greenspan, a Le Cordon Bleu graduate who helped build Mr Beast Burger, quietly departed and scrubbed his Instagram of any association with the venture. The hundred-person lines evaporated. The global expansion plan went the same way as Musk’s other promises: nowhere. On a recent Friday afternoon, more staff were lifting fingerprints off the chrome walls than there were customers.
The Guardian reports that the novelty of eating at a restaurant owned by the world’s most hated man “seems to have worn off.” A more precise diagnosis: the reputational cost of association with Musk ventures now exceeds any benefit, and the competent professionals have done the math.
Greenspan’s Instagram scrubbing is the digital equivalent of removing a company from your résumé before it gets raided. He hasn’t publicly explained his departure. He doesn’t need to. The AfD promotion in Germany and the Nazi salutes at Trump’s inauguration (“repeatedly portrayed in the picket signs held by Tesla Diner protesters,” per the Guardian) made the calculation straightforward.
This is what the Musk ecosystem is all about: not dramatic collapse, but a leaky hype balloon, a gradual evacuation by anyone with options, leaving behind only the true believers. The empire can absorb the diner not being a diner. A shiny chrome dumpster fire in Hollywood is embarrassing but survivable.
Starlink is the other side of this coin.
The same week the Guardian documented the diner’s decline, Forbes published what reads like Starlink investor relations copy. Joel Shulman, who discloses financial affiliations with investment vehicles that benefit from exactly this narrative, celebrates Musk as playing “a different entrepreneurial game.”
Different is an interesting word choice. The piece inadvertently catalogs every Musk vulnerability while fraudulently framing them as strengths:
“His companies iterate faster than regulators, incumbents, and even capital markets are structured to absorb.”
The simple stupidity of raw speed is presented as true genius. It is actually an explicit strategy of toddler-level skill: operate outside democratic accountability. The speed isn’t about innovation; it’s about fait accompli. Get the absolute worst possible version of infrastructure embedded before anyone can object.
“A vertically integrated, globally scalable communications network that bypasses nearly every legacy constraint of the telecom industry.”
Those “constraints” include safety and reliability, regulatory oversight, spectrum licensing, and the political processes that prevent private actors from controlling critical infrastructure without accountability. Bypassing them isn’t really a feature, especially after governments decide it isn’t.
“Infrastructure that governments, industries, and populations increasingly depend on.”
The Ukraine episode already demonstrated what happens when Musk controls infrastructure that anyone depends on. He toggled access based on a personal whim. The piece treats dependency as a moat. It’s actually an invitation to regulatory intervention, if not forfeiture.
“Switching costs are high where Starlink is the only viable option.”
The monopoly framing. This is the argument for why regulators will eventually act, not why they won’t. Shulman bizarrely invokes railroads and electricity as precedents for infrastructure monopolies that compound private wealth indefinitely. He appears not to have read the second half of that history.
Railroads: The Interstate Commerce Act of 1887. Federal rate regulation. Antitrust action. Eventually nationalization of passenger rail. The robber baron era ended precisely because railroad dependency triggered democratic backlash.
Electricity: Heavily regulated as a public utility. Rate-setting by state commissions. Must-serve obligations. Prohibition on discriminatory pricing.
The monopoly dream Shulman celebrates was tamed by regulation in every historical instance he cites. Rockefeller’s Standard Oil was broken up. AT&T was broken up. The Gilded Age produced the Progressive Era.
“Infrastructure makes it permanent,” Shulman writes, as if history ends at the moment of monopoly formation.
It doesn’t.
The political economy of essential infrastructure has a second act: public assertion of control over private power to prevent catastrophe. He’s describing the conditions under which democratic societies historically decide that private control of critical infrastructure is obviously unacceptable.
Apparently he wants to rewrite history, or just doesn’t realize he’s making the argument against himself.
The Tesla diner shows the trajectory. The Forbes piece shows the radical investor class hasn’t noticed.
When the competent people flee and only the loyalists remain—people selected for devotion rather than capability—you get soggy industrial fries served in a soulless, empty and shiny corporate diner.
That’s the optimistic scenario.
The pessimistic scenario is the same dynamic applied to global communications infrastructure that governments and militaries depend on. An erratic autocrat who has already demonstrated he’ll use infrastructure access as political leverage. A workforce increasingly selected for loyalty over competence. No democratic accountability structure. Explicitly designed to outrun regulation.
Starlink stands exposed as global communications infrastructure run by an erratic autocrat and maintained by a loyalty cult.
The diner is the proof of concept—showing exactly what happens when the reputational toxicity reaches escape velocity and the professionals calculate their exit.
A New York family-owned grocery chain investing in eyes/voice/face biometric collection infrastructure just for shoplifting prevention doesn’t quite add up economically.
It reminds me of how IBM pushed license plate reader technology onto NYC bridges in 1966.
Wegmans launched facial recognition in October 2024 at its Brooklyn Navy Yard location, initially claiming it would delete data from non-consenting shoppers. It’s a notable claim given a $400,000 settlement with New York’s Attorney General over a data breach exposing 3 million consumers.
It’s also notable given the FTC’s 2023 Rite Aid enforcement action, revealing the chain used facial recognition in “hundreds of stores” from 2012-2020, generating “thousands of false-positive matches” that disproportionately flagged women and people of color. Rite Aid received a five-year facial recognition ban and was required to delete both images and algorithms developed from collected data.
By 2025, the Wegmans program expanded to all NYC stores with a critical policy change: signage now indicates collection of face, eye, and voice data from all shoppers, and the promise to delete non-participant data was removed.
Wegmans’ privacy policy claims biometric collection is “limited to facial recognition information” yet their in-store signage says face, eye, and voice data are collected, a large discrepancy the company has not explained.
The opacity of Wegmans’ specific arrangements—refusing to disclose its vendor, data retention policies, or law enforcement sharing practices—suggests awareness that basic levels of transparency might reveal uncomfortable interdependencies of a growing surveillance economy that shifts costs to taxpayers (through grants and policing partnerships), shares risks across industry consortiums, and potentially opens future monetization pathways.
Wegmans’ own privacy policy states:
We may provide Security Information to law enforcement for investigations, to prevent fraud, or for safety and security purposes.
Think twice about the real price. At Wegmans, the bananas capture and sell you.
When an outsider gets off a plane in Nepal for the first time, all the faces in the airport crowd blur together. A month later, they see Tibetans, Indians, Chinese, Nepalese. Mountain faces, valley faces. Nobody teaches the outsider what to look for. They just experience exposure and the human perceptual system builds the categories.
When a mountain village Maoist teenager points an AK-47 at that outsider, the out-of-place hostile appearance becomes obvious, yet is identified far too late. The outsider arrived with a collapsed face-space for South Asians. A month later, the axes develop to distinguish Sherpa from Tamang from Newar, friendly from hostile. Perceptual learning creates differentiation as statistical exposure builds out reliable dimensions.
Boy with automation technology wows the ladies in Butwal, Nepal. Look at his face, and what do you see? Source: AP
As someone who grew up on the most rural prairie in Kansas, I can tell you this is the redneck problem: someone whose environment didn’t provide the data to build certain distinctions is vulnerable.
We knew as kids we weren’t supposed to shoot signs. Wasn’t that the whole point of shooting the signs? Our red neck was a physical marker of the environmental conditions that predicted the kind of isolation that leads to perceptual poverty.
The person who “can’t tell them apart” isn’t lazy or hostile as much as they are a product of their (often fear based) isolation. They’re accurately reporting self-imposed limited perceptual reality. Their pattern recognition system, stuck out in the fields alone, never benefited from human training data. They could identify lug nuts and clay soil yet not a single tribe of Celts.
The same problem is crippling Western synthetic face detection research related to deepfakes. And it’s a problem I’ve seen before.
Layer Problem
My mother, a linguistic anthropologist, and I published research on Nigerian 419 scams starting nearly twenty years ago. We argued that intelligence brings vulnerability and published papers making that case. We even presented it at RSA Conference in 2010 under the title “There’s No Patch for Social Engineering.”
One of our key findings: intelligence is not a reliable defense against social engineering.
The victims of advance fee fraud weren’t stupid. They were, disproportionately, well-educated professionals such as university professors, doctors, lawyers, financial planners. People who trusted their reasoning.
I remember training law enforcement investigators one day, in a windowless square room of white men bathed in drab colors and cold fluorescent lighting, and being told by them that this concept of wider exposure would be indispensable to their fraud cases.
A 2012 study in the Journal of Personality and Social Psychology then corroborated our work and found the same pattern more broadly: “smarter people are more vulnerable to these thinking errors.”
These researchers, without reference to our prior work, found that higher SAT scores correlated with greater susceptibility to certain cognitive biases, partly because intelligent people trust their own reasoning on the strength of past success and don’t notice when it is being disrupted, being bypassed.
The attack works because it targets a layer below conscious analysis. You can’t defend against bias attacks with intelligence, because intelligence operates at the wrong layer. The defense has to match the attack surface.
I’m watching the synthetic face detection literature make the same mistake again.
Puzzle This
A paper published last month in Royal Society Open Science tested whether people could learn to spot AI-generated faces.
The results were striking but confusing.
Without training, typical observers performed below chance: they actually rated synthetic faces as more real than real ones.
This isn’t incompetence. It’s a known phenomenon called AI hyperrealism: GAN-generated faces are statistically too average, too centered in face-space, and human perception reads that as trustworthy and familiar.
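That “too centered” claim is easy to illustrate with a toy model. A minimal sketch, assuming faces live in some high-dimensional embedding where real faces carry full natural variation and the generator samples from a slightly shrunken version of the same distribution (roughly what truncation tricks in GAN sampling do); the dimensionality and the 0.7 shrink factor are invented for illustration:

```python
import random
import math

random.seed(1)
DIMS = 128   # hypothetical face-embedding dimensionality (made up)
N = 2000

def face(scale):
    """Draw one face embedding; smaller scale = closer to the average face."""
    return [random.gauss(0.0, scale) for _ in range(DIMS)]

def distance_to_mean(v):
    """Distance from the average face, which sits at the origin in this toy model."""
    return math.sqrt(sum(x * x for x in v))

real = [distance_to_mean(face(1.0)) for _ in range(N)]        # full natural variation
synthetic = [distance_to_mean(face(0.7)) for _ in range(N)]   # truncated sampling

print(f"mean distance to average face, real:      {sum(real) / N:.1f}")
print(f"mean distance to average face, synthetic: {sum(synthetic) / N:.1f}")
```

The synthetic samples land measurably closer to the average face, and that hyper-typicality is the statistical property human perception tends to read as familiar, trustworthy, and therefore real.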
Super-recognizers, the top percentile on face recognition tests, performed at chance without training. Not good, but at least not fooled by the hyperrealism effect.
Then both groups got five minutes of training on rendering artifacts: misaligned teeth, weird hairlines, asymmetric ears. The kind of glitches GANs sometimes leave behind.
Training helped, unlike in the study I examined back in 2022. Trained super-recognizers hit 64% accuracy. So here’s the puzzle: the training effect was identical in both groups. Super-recognizers didn’t benefit more or less than typical observers.
The researchers’ conclusion:
SRs are using cues unrelated to rendering artefacts to detect and discriminate synthetic faces.
Super-recognizers are detecting something the researchers could not identify and therefore can’t train. The artifact training adds a second detection channel on top of whatever super-recognizers are already doing. But what they’re already doing is presented as a black box.
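One way to make “identical training effect” concrete is to put the reported accuracies on a common sensitivity scale. A minimal sketch, assuming equal-variance signal detection; the hit and false-alarm rates below are invented for illustration (the paper reports overall accuracy, not these rates) and chosen only to reproduce the below-chance, chance, and roughly 64% figures:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Equal-variance signal-detection sensitivity: z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Illustrative (not reported) rates consistent with the pattern in the paper:
# typical observers start below chance, super-recognizers start at chance,
# and five minutes of artifact instruction lifts both by about the same amount.
groups = {
    "typical, untrained":          (0.40, 0.55),  # calls fakes real -> below chance
    "typical, trained":            (0.58, 0.45),
    "super-recognizer, untrained": (0.50, 0.50),  # chance
    "super-recognizer, trained":   (0.68, 0.40),  # ~64% accuracy territory
}

for label, (hit, fa) in groups.items():
    accuracy = (hit + (1 - fa)) / 2   # balanced real/fake trials
    print(f"{label:30s} accuracy={accuracy:.3f}  d'={d_prime(hit, fa):+.2f}")
```

On these toy numbers the accuracy gain from training is the same (+14 points) in both groups, which is the additive pattern the researchers describe: artifact instruction adds a second detection channel without interacting with whatever the super-recognizers were already doing.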
Wrong Layer, Again
The researchers are trying to solve a perceptual problem with instruction. “Look for the misaligned teeth” is asking the cognitive layer to do work that needs to happen at the perceptual layer.
It’s why eugenics is fraud: selecting at the genetic layer for traits that develop at the exposure layer.
It’s also the same structural error as trying to defend against social engineering with awareness training. Watch out for urgency tactics. Be suspicious of unsolicited requests. Except, of course, you still need to allow urgent unsolicited communication. Helpful, yet not helpful enough.
The instruction targets conscious reasoning. The attack targets intuition and bias. The defense operates at the wrong layer, so it fails easily, especially where attackers hit hidden bias such as racism.
The banker who never went to Africa is immensely more vulnerable to fraud with an origination story from Africa. Intelligence without diverse exposure opens the vulnerability, and also explains the xenophobic defense mechanism.
Radiologists don’t learn to read X-rays by memorizing a checklist of tumor features. They look at thousands of X-rays with feedback. The pattern recognition becomes implicit. Ask an expert radiologist what they’re seeing and they’ll often say “it just looks wrong” before they can articulate the specific features.
A surgeon with training will look at hundreds of image slices of the brain on a light board in one room and know where to cut in another room down the hallway.
Japanese speakers learning English don’t acquire the /r/-/l/ distinction by being told where to put their tongue. They acquire it through exposure. Hundreds of hours of hearing the sounds in context, and their perceptual system eventually carves a boundary where none existed before.
Chicken “sexers” are the canonical example in the perceptual learning literature. They can’t tell you how they distinguish gender of day-old chicks. They just do it accurately, after enough supervised practice.
This is the pattern everywhere that humans develop perceptual expertise: data first, implicit learning, explicit understanding (maybe) later.
Five minutes of “look for the weird teeth” gets you artifact-spotting as a conscious strategy. It doesn’t build the underlying statistical model that makes synthetic faces feel wrong before you can say why. And just like with social engineering, the people who think they’re protected because they learned what to look for may be the most confidently wrong.
But the dependence on artifact-spotting also tells you something about the people who believe in it. They seek refuge in easy, routine, minimal-judgement fixes for a world that requires identification, storage, evaluation and analysis. The former without the latter is just snake oil, like placebos during a pandemic.
Compounding Vulnerability
The other-race effect is well-documented: people are worse at distinguishing faces from racial groups they haven’t had exposure to. The paper even found it in its own data: participants were better at detecting synthetic faces when those faces were non-white, likely because the GANs were trained primarily on white faces and rendered other ethnicities less convincingly.
“My friend is not a gorilla.” Google trained only on Asian and white faces, with disastrous results when its system tried to distinguish humans from animals. Don’t you want to know who discovered the bias in their engineering and when?
If you have less exposure to faces from other groups, you’re worse at distinguishing individuals within those groups. And if you’re worse at distinguishing real faces, you’re certainly worse at detecting synthetic ones.
Deepfakes may be a racism canary.
The populations most susceptible to disinformation using AI-generated faces are precisely the populations with the least perceptual defense. Isolated communities. Homogeneous environments. Places where “they all look alike” is an accurate description of perceptual reality.
An adversary running a disinformation attack campaign knows this. Target the isolationists, because of their isolation. “America First”, historically a nativist racist platform of hate, signals perception poverty.
If you’re generating fake faces to manipulate a target population, you generate faces from groups the target population has the least exposure to. The attack surface is largest where perceptual poverty runs deepest.
The redneck who “can’t tell them apart” isn’t just failing a social sensitivity test. They’re a soft target. Their impoverished face-space makes them maximally vulnerable to synthetic faces from unfamiliar groups. They can’t detect the fakes because they never learned to see the reals.
This compounds with the social engineering vulnerability. The same isolated populations are targets for both perceptual attacks (fake faces they can’t distinguish) and cognitive bias attacks (scams that bypass reasoning). The defenses being offered like artifact instruction and awareness training both fail because they target the wrong layer.
Prejudice is Perceptual Poverty
The foundation of certain kinds of hate is ignorance. Not ignorance as moral failing – ignorance as literal absence of data.
The perceptual system builds categories from exposure. Dense exposure creates fine-grained distinctions. Sparse exposure leaves regions of perceptual space undifferentiated. The person who grew up in a homogeneous environment doesn’t choose to see other groups as undifferentiated. Their visual system never got the training data to do otherwise.
This reframes prejudice, or at least a big component of it. Not attitude to be argued with. Not moral failure to be condemned. Perceptual poverty to be remediated.
And here’s the hope: the human system is plastic.
A month in Nepal fixes the Nepal problem. A year in a diverse environment builds cross-racial perceptual richness. The same neural architecture that fails to distinguish unfamiliar faces can learn to distinguish them. It just needs data.
Diversity training programs typically target attitudes. “Stereotyping is wrong and here’s why.” But you can’t lecture someone into seeing distinctions their visual system isn’t configured to make, or that may even have been degraded by years of America First rhetoric. The intervention is at the wrong layer.
What if you could train the perceptual layer directly?
The Experiment Nobody Has Run
The synthetic face detection literature keeps asking “what should we tell people to look for?” The question they should be asking is “how much exposure produces implicit detection?”
Here are the study designs, for those looking to leap ahead (a rough sketch of the trial loop follows the lists):
For AI detection:
Recruit typical observers (not super-recognizers)
Expose them to 500+ synthetic and real faces per day, randomly intermixed
Provide only real/fake feedback after each trial, no instruction on features
Continue for 4-6 weeks
Test detection accuracy at baseline, weekly during training, and post-training
Compare to control group receiving standard artifact instruction
Test whether training transfers to faces from a new GAN architecture
For deeper questions of safety:
Use stimuli that include faces from multiple racial/ethnic groups
Test whether exposure-based training improves detection equally across groups
Test whether it also improves cross-racial face discrimination (telling individuals apart) as a side effect
Measure implicit bias before and after
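For a sense of what the exposure arm of that design could look like in practice, here is a rough sketch of the daily trial loop: feedback only, no feature instruction. Everything in it (the observer interface, the stimulus pool, the session bookkeeping) is a hypothetical placeholder rather than an existing toolkit:

```python
import random
from dataclasses import dataclass, field

@dataclass
class Trial:
    image_id: str
    is_synthetic: bool
    response_synthetic: bool

@dataclass
class Session:
    day: int
    trials: list = field(default_factory=list)

    def accuracy(self):
        correct = sum(t.is_synthetic == t.response_synthetic for t in self.trials)
        return correct / len(self.trials)

def run_day(day, pool, observer, n_trials=500):
    """One day of exposure: randomly intermixed real and synthetic faces,
    a forced real-or-fake judgment, then immediate right/wrong feedback,
    and never any instruction about which features to use."""
    session = Session(day=day)
    for image_id, is_synthetic in random.sample(pool, n_trials):
        response = observer.judge(image_id)   # hypothetical participant/model interface
        session.trials.append(Trial(image_id, is_synthetic, response))
        observer.feedback(image_id, correct=(response == is_synthetic))
    return session

class GuessingObserver:
    """Placeholder observer responding at chance, just to dry-run the loop."""
    def judge(self, image_id):
        return random.random() < 0.5
    def feedback(self, image_id, correct):
        pass  # a real participant (or learning model) would update from this signal

if __name__ == "__main__":
    pool = [(f"img_{i:04d}", i % 2 == 0) for i in range(2000)]  # fake labeled stimulus pool
    for day in range(1, 8):
        print(day, f"{run_day(day, pool, GuessingObserver()).accuracy():.2f}")
```

The design choice that matters is inside run_day: the participant only ever gets right/wrong feedback, never a feature list, so whatever expertise emerges has to be built the way radiologists and chicken sexers build theirs. Weekly accuracy from these sessions, plus a held-out test set generated by a different architecture, gives you the baseline trajectory and the transfer measurement.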
My prediction: exposure-based training will work for synthetic face detection, producing super-recognizer-like implicit expertise in typical observers. And as a side effect, it will build cross-racial perceptual richness.
The transfer test matters. If exposure-trained observers can detect synthetic faces from novel generators they’ve never seen, they’ve learned something general about real versus synthetic faces. If they can only detect faces similar to their training set, they’ve just memorized one architecture’s failure modes.
The cross-racial test matters more. If diverse exposure simultaneously improves AI detection and reduces perceptual other-race effects, you’ve found an intervention that works at the right layer.
Yoo Hoo Over Here
I’ve been watching security research make this mistake for twenty years.
Social engineering attacks bias. The defense offered: awareness training. Wrong layer.
Prejudice operates partly at the perceptual level. The defense offered: diversity lectures about attitudes. Wrong layer.
In each case, the intervention appeals to conscious reasoning to solve a problem that operates below conscious reasoning. In each case, smart people are not protected – and may be more vulnerable because they trust their analysis.
The defense has to match the attack surface. You can’t patch social engineering with intelligence. You can’t patch perceptual poverty with instruction.
You patch it with data. Structured, extended, high-volume exposure that trains the layer actually under attack.
The redneck problem isn’t moral failure. It’s data deprivation.
The fix isn’t instruction. It’s exposure. The term redneck describes a remediable data deprivation, not a moral defect.
One does not typically expect to find oneself arguing with a film’s color palette for its Nazis. Yet here we are. A new Italian film isn’t just making a palette mistake, however; it’s systematically reconstructing fascism as its exact opposite.
Silvio Soldini’s Le assaggiatrici (2025) is based on Rosella Postorino’s bestselling 2018 Italian novel of the same name about Hitler’s food tasters at the Wolfsschanze. In German it’s titled Die Vorkosterinnen.
The book cover features a seductive red butterfly that obscures an Aryan model, as imposed red lipstick defines her identity. The red of Nazi ideology appears to be consuming her, in a book about forced consumption or death.
It has arrived to generally favourable notices. The performances are creditable. The tension is effectively sustained. The director has stated, in interviews with Deutsche Welle and elsewhere, that he prioritises “emotional truth” over historical precision, which seems like a defensible artistic position, and one that accounts for certain liberties taken with the source material.
What it does not account for is the film’s extraordinary decision, a work of visual disinformation, to wash the entire Nazi apparatus in petrol (teal).
Chromatic History of National Socialism
Adolf Hitler was many things. Indifferent to visual propaganda was definitely not one of them.
His very particular selection of red, white, and black for the visual identity of Nazism was not accidental. Hitler addressed the question directly in Mein Kampf, explaining that Imperial German red was deliberately chosen for psychological impact. He wanted its association with revolution, its capacity to command attention, its physiological effect on the blood and nerves. The Nuremberg rallies were intentionally seas of red. The swastika banner was designed, by Hitler’s own account, to be impossible to ignore.
This was, one must acknowledge, a propaganda achievement built on the lessons of WWI (e.g. Woodrow Wilson’s belief in spectacle as a weapon, leading to Edward Bernays’s publication of a propaganda bible). The Nazis understood from the last war, if not many before them, that militant power and rapid disruption come not merely through argument but through aesthetic experience. The red was aggressive, confident, seductive. It promised antithesis, rupture, transformation. It stirred.
Historians have documented this extensively, leaving zero doubt. The visual architecture of fascism was Albert Speer’s Cathedral of Light, Leni Riefenstahl’s geometric masses of uniformed bodies, and most of all the omnipresent crimson banners.
1939 Nazi red banners contrasted sharply and covered everything, like the MAGA hat today. Source: Hugo Jaeger/Life Pictures/Shutterstock
The threat of burgundy covering Europe was not incidental to National Socialism but constitutive of it.
The Fiction of a Teal Reich
In Soldini’s film, none of this exists.
The SS uniforms, which on set were presumably some variant of field grey, have been color-graded into a cold greenish blue. This is what Europeans might call petrol, and what Americans call teal. The train carriages are teal. The Wolfsschanze shadows are teal. The very air of occupied Poland appears to have been filtered through Caribbean seawater.
Americans thinking of azure blue vacations of peace and tranquility will be shocked to find this movie painting SS officers in the wrong palette.
Meanwhile, the women who are the victims, unwilling food tasters conscripted into service under threat of death, are dressed almost uniformly in burgundy and brown.
Warm tones. The color family of the swastika banner is applied to the victims, as if to invoke and rehydrate the Hitler propaganda of young beautiful Aryan women in danger. Even the protagonist’s name is Rosa!
The shallow symbolic intention seems transparent: teal is meant to convey the cold machinery of death versus the flushed red cheeks of warm human vulnerability. Petroleum versus blood. It is the sort of color theory one encounters in undergraduate film studies seminars, and it is executed competently enough.
The difficulty is that it ends up ironically being fascist propaganda because it is precisely backwards.
Hitler Was an Inversion Artist
Consider what the audience is being taught.
A viewer encountering this film, especially the younger viewer for whom the Second World War is ancient history, absorbs the following visual grammar: Fascism is cold. Fascism is teal and grey and clinical. Fascism looks like a hospital corridor, or a Baltic winter, or an industrial refrigeration unit.
Die Vorkosterinnen depicts Nazi uniforms and machinery only in hues of teal. The SA were literally called “Brownshirts” when they seized power and destroyed democracy alongside the black-clad SS. The shift to earth grey (erdgrau) came later, during the war.
False.
This is not what fascism looked like. It rose, in fact, as the exact opposite.
Source: “Hitler and the Germans” exhibit at the German Historical Museum, Berlin.
Fascism in Germany was always meant by Hitler to be red hot. It was his vision of Imperial red, white and black for stirring reactions and emotive attachment. It was torchlight and drums and the intoxication of abrupt mass belonging and sudden purpose. It was institutional drug and drink abuse to dispense rapid highs.
The Nazis did not present themselves as slow and precise bureaucrats of byzantine rules. That was how they aspired to operate, but not how they recruited or actually functioned. They presented themselves as easy vitality, as rapid revolution, as blood and fire and national resurrection.
They were the cheap promise and marketing of Red Bull, Monster, and 5-hour Energy shots, not bowls of slow-cooked hearty soup and vegetables with cream. “Fanta” was the Nazi-era division of Coca-Cola, marketed like a Genozid-Fantasie, a genocide fantasy, in a bottle.
Fanta was created by Coca-Cola to keep profiting from Nazi Germany while avoiding sanctions. It was made from industrial food byproducts (apple waste, milk waste), marketed as a health drink under a name short for “fantasy,” because it was all about swallowing lies.
The women, meanwhile, would not have dressed in coordinated burgundy. They were rural conscripts and Berlin refugees. They wore what they had. But even setting aside questions of costume accuracy, there is something perverse about rendering victims in the color palette of the perpetrator’s own propaganda. Notably, the women are also portrayed as the smoking, drinking and promiscuous ones, while the Nazis are falsely depicted as teetotalers.
This reversal is painful to see: the Nazis in the film are played as the complete inversion of what makes Nazism so dangerous.
“Emotional Truth” and Its Discontents
Director Soldini has explained that historical precision matters less to him than achieving an emotional resonance. One sympathises with the artistic impulse to generate ticket sales. The film is definitely not a documentary, and accuracy is a burden that can produce its own distortions that don’t translate well to audience growth.
But “emotional truth” is not a free pass to rehydrate Nazism. If your emotional symbolism teaches audiences to look for the wrong visual signatures, if it trains them to associate fascism with cold clinical teal rather than seductive aggressive red, then your emotional truth is propagating a functional falsehood that is dangerous.
This disinformation risk matters far more today than it might have in 1995 or 2005. We are presently surrounded by political movements that borrow freely from the fascist playbook whilst their critics struggle to name what they are seeing. A large part of that struggle is visual.
People have been taught, through decades of misleading films like this one, that fascism is ugly: grey uniforms and clinical efficiency and cold industrial murder. It was not.
They have not been taught that it looks like rallies of red hats and the intoxication of belonging to something larger than oneself.
Every member of the Huntington Beach City Council poses for a photo wearing red “Make Huntington Beach Great Again” hats at a swearing-in ceremony on 3 Dec 2024.
They have not been taught to recognize the aesthetic of hot, rapid seduction and “day one” promises of disruption.
Hollywood Teal
One must also note that Soldini is operating within a system. The teal-and-orange color grade has become so pervasive in contemporary cinema that it functions as a kind of default reference.
He pulled the visual equivalent of scoring every emotional beat with swelling orchestra strings. Teal is what films lean on for tension, ignoring the fact that many people dream of holidays in a typical Caribbean blue scene like a Corona ad.
This creates a particular problem for historical cinema. When every thriller, every dystopia, every prestige drama reaches for the same cool teal palette to signal “this is danger,” the color loses its actual meaning.
It becomes mere convention.
And when that convention is misleadingly applied to the Third Reich, it overwrites the actual chromatic signature of the period with a contemporary aesthetic that signifies nothing more than “this film is a color-by-number for cinematic bad things.”
The Nazis were not teal.
But teal is the reduced palette serious films dip into, so the Nazis get rehydrated in it. And viewers start embracing Nazism again while thinking the cool, calm, drab good guys are the enemy (as targeted by hot-headed, attention-seeking rage lords).
White nationalist Nick Fuentes has said repeatedly that racist MAGA is racist America First, and that is exactly what he wants.
We Train Eyes to See the Train
One of the most annoying aspects of the film (SPOILER ALERT) is that the director abruptly has the Jewish woman killed for trying to board the train to freedom. Of course in history the Nazi trains actually symbolize the concentration camps, where anyone boarding faced almost certain death. Yet here is a film that shows the inversion: trains as the path to freedom for the idealized Aryan woman working for Hitler, while the Jewish woman is denied the ride.
The inspiration for the love story between Rosa and [SS leader] Ziegler stems from Woelk’s statement that an officer put her on a train to Berlin in 1944 to save her from the advancing Red Army, the armed forces of the Soviet Union. She later learned that all the other food tasters had been shot by Soviet soldiers.
That’s Nazi propaganda pulled forward, pure and unadulterated.
The love story in the film frames the SS leader as a kind-hearted savior: he shoots a Jewish woman in the back so she cannot be liberated by approaching Allied soldiers, yet “saves” the Aryan girl by gifting her a rare spot on a Nazi train.
The film covers the protagonist’s hands in the blood of the Jewish woman murdered by her SS lover, blood she stares at on the train, perhaps to emphasize how the Swastika was believed to be a symbol of being lucky at birth. She lived to be 91 thanks to the SS, who made sure that a Jewish woman didn’t get a spot on that train, just a bullet in the back.
And just to be clear, Judenhilfe (hiding or even befriending a Jew) was a capital crime for years, eliminating all doubt by killing anyone who doubted. An Aryan woman caught running beside the Jewish woman she was helping and defending would not have been spared when an SS officer opened fire. Under Nazi logic, which only worsened over time and thus especially by 1945, it would be like a policeman shooting the passenger in a criminal getaway car and then offering the driver a can of gas.
There is a reason disinformation historians care about such visual culture. Political movements are recognised, and hidden, partly through their weaponization of aesthetics. The person who knows that fascism comes wrapped in red flags of instant vitality and promises of national greatness is better equipped to identify it than the person who has been taught to feel disgust for the cool grey of law and order, to hate calm bureaucrats in clinical blue corridors.
Soldini’s film, whatever its other merits, trains eyes to see the exact wrong thing. The real-world good-guy palette is flipped to evil, and audiences are pushed to embrace the palette of Hitler’s violent hate.
The logical inversion matches the chromatic one: murderous SS as loving saviors.
Soldini color-corrects and codifies fascism into something unrecognisable, antithetical. In doing so, he makes the real thing far harder to recognize today, when it flashes all around us, signaling as it always has.
The Spanish edition’s cover designer understood something Soldini didn’t. The red apple is the focal point: the danger, the temptation, the poison risk. It sits against cool grey tones. The red is what threatens. The grey is the safe, institutional backdrop.