The Priest Who Hated the Printing Press: Who Benefits from AI Anxiety?

The danger of enlightenment was never to the thinker; it was that the thinker endangered the legitimacy of those who held an interpretive monopoly.

The Church didn’t burn heretics because independent thought was so hard to do well; they burned them to prevent a contagious independence that threatened the entire apparatus that converted doctrinal control into material power.

It’s like worrying that a machine gun will show up at a jousting tournament.

The current professional-managerial class of journalists, academics, lawyers, doctors, and policy fellows built their authority on credentialing systems that controlled who got to interpret reality for others.

AI will indeed make some people think less. Look at all the people who buy a Tesla and let it kill them by driving into a tree. Yet AI is so broken without careful thought that it also produces the reverse effect: people use it to think more. It’s like saying guns with a better scope and a repeating mechanism would make people hunt less; it breaks the bottleneck for those who know the right end of a rifle.

Suddenly anyone can generate plausible legal analysis, medical interpretation, and policy framing at much greater speed. The quality varies by the person aiming the barrel, but so did the quality of what the credentialed class produced; we just weren’t supposed to question their integrity. And that is going to be a very, very big problem with AI.

De Weck’s new essay in the Guardian is, structurally, a priest worrying about a printing press.

Our king, our priest, our feudal lord – how AI is taking us back to the dark ages

Or perhaps more to the point, he is poking the mob about lighting torches and grabbing pitchforks to throw a modern printing press into the river. That’s different from a printing press engineer warning about safe use of the machine. Who benefits from the anxiety?

He frames it as concern for the soul of the congregation, but the actual anxiety is positional. A “fellow with the Foreign Policy Research Institute” depends on a system where the privilege to dispense foreign policy interpretation was made scarce and gatekept. AI abruptly makes his particular form of cognitive labor rather abundant and very cheap.

I’m not much for dwelling on Kant, but I know the honest Kantian question is far from “will people stop thinking?” No, Kant would demand “which institutions currently profit from monopolizing thought-on-behalf-of-others, and what happens when that monopoly breaks?”

The true Kantian would require the professional class to examine its own complicity in manufacturing a scarcity and dependency that it now laments.

Distributed, standard measures of integrity should have been planned long before the last three decades of technology, when personal computing power directly challenged the closed approach to controlling knowledge. Fortunately, we have some new independent ideas…

Historiography of Operation Cowboy, April 1945

The timing of Operation Cowboy in WWII is damning when you look at the calendar:

  • April 23, 1945: Flossenbürg concentration camp liberated by Patton’s Third Army. Most of the prisoners had been sent out on death marches across Bavaria as Allied troops approached. American soldiers found only 1,500 survivors amid mass graves; 30,000 had died there, and many more died on the marches meant to prevent their liberation. Dietrich Bonhoeffer was executed April 9, 1945, just two weeks before Patton rolled up.
  • April 28, 1945: Operation Cowboy launches for Patton to rush ahead and liberate a Nazi veterinary center holding 600 captured Russian horses, alongside aristocratic breeds, before the Soviets could.
  • April 29, 1945: The 42nd and 45th Infantry Divisions and the 20th Armored Division of the US Army liberate approximately 32,000 prisoners at Dachau, far fewer than the 67,000 registered there just three days before.

The same Army, the same week.

One order under Patton got a romantic mission name, a special task force authorization, all the artillery barrages it needed to clear a path, and the unprecedented decision to arm surrendered Wehrmacht soldiers. It’s been repackaged ever since as a feel-good story of liberation from Nazism.

The other efforts were… happening.

Why the delta? Patton’s postwar diary entries about Jewish displaced persons are notorious. He described humans in subhuman terms, complained about having to allocate resources to concentration camps, and compared survivors unfavorably to the Germans he was occupying. If only they had been aristocratic horses instead. Patton even came away from death camps wanting to rearm Germany almost immediately, which is damning on its own.

Not a show horse. Eugen Plappert, ca. 1930, with his many athletic medals. Imprisoned in Flossenbürg under the Nazis’ 1938 “preventive detention,” he was told by the SS in 1942 that they were sending him to a country estate for his health. It was a lie; they killed him with gas on May 12.

Operation Cowboy wasn’t an aberration or exception; it was perfectly consistent with Patton’s worldview of who and what mattered.

  • Aristocratic horses? European civilization worth a rush to preserve.
  • Wehrmacht officers? Professionals to work with.
  • Death camp survivors? A logistics problem.

The tell is in what gets reported by Military.com as “beautiful”:

We were so tired of death and destruction, we wanted to do something beautiful.

They were surrounded by death and destruction that week. They chose which rescue operation would be counted as “beautiful.”

Historiography in Plain Sight

This story gets retold as heartwarming.

Military.com now runs it as holiday content. Disney made a bizarre movie called Miracle of the White Stallions (1963).

The feel-good framing launders what the episode reveals about genocide and about what Patton considered worth saving; command attention and priority signaling matter.

Stills from Disney’s 1963 movie: the Habsburg aristocratic pageantry worth preserving (top) and the “sympathetic” enemy general in uniform with Nazi eagle (bottom). The film is not currently available on Disney+.

Military.com notably tells us Patton approved horse rescue “immediately” with “Get them. Make it fast.” The urgency for hundreds of aristocratic show horses is documented. The urgency for tens of thousands of human prisoners was not.

“Special prisoner barracks. Drawing from memory by Colonel Hans M. Lunding, head of the Danish intelligence service and cellmate of Admiral Canaris.” Source: dietrich-bonhoeffer.net

2025 Blackout: Waymo Shit on San Francisco Because That Was Always the Plan

Digital Manure Is the Point of Technocratic Debauchery

On Saturday, December 20, the infamously unreliable PG&E infrastructure again failed to prevent a fire, and a third of San Francisco was left in darkness.

Traffic lights went dark. The city’s Department of Emergency Management naturally urged restraint, asking everyone to stay home so emergency responders could work unimpeded.

Instead, across the city, Waymo’s fleet of 800 to 1,000 robotaxis blocked streets by doing the exact opposite: stopping in intersections, impeding buses, and according to city officials, delaying fire department response to the substation blaze itself and a second fire in Chinatown.

Waymo’s logic was to make the disaster worse and to become another disaster itself, as if massive PG&E failures weren’t a known and expected annual pain in California.

…Waymo vehicles didn’t pull over to the side of the road or seek out a parking space. Nor did they treat intersections as four-way stops, as a human would have. Instead, they just … sat there with their hazard lights on, like a student driver freezing up before their big parallel-parking test. Several Waymo vehicles got stuck in the middle of a busy intersection, causing a traffic jam. Another robotaxi blocked a city bus.

The company then dropped a disinformation bomb from the corporate blog three days later.

Waymo described itself as “the world’s most trusted driver” in the aftermath of being completely untrustworthy. It tried to distract readers with how it “successfully traversed more than 7,000 dark signals.” That would be like pointing at all the cats and dogs it hasn’t murdered yet. Finally, Waymo promised that it was “undaunted,” in a post explaining why it had just manually pulled its entire fleet off the streets. Sounds daunted to me.

The ironies are irresistible, as evidence of cognitive dissonance driving their groundless rhetoric, but the deeper story is why their technology exists, who it serves, and what its presence on public streets reveals about political corruption in California.

Big Tech’s Prancing Princes

In the eighteenth and nineteenth centuries, aristocratic carriages were an unfortunate and toxic fixture of European cities. The overly large and heavy boxes moved unaccountably through streets built and maintained by public taxation and peasant labor. They spread manure for public workers or the conscripted poor to chase and clean. When the elites ran over pedestrians, which happened regularly, there was no legal recourse. Commoners stepped in fear and bore costs, as the privileged passed without paying.

The carriage was far more than transportation. It was the physical assertion of hierarchy in public spaces. Its externalities of the manure, the danger, the congestion, and the noise were all pushed out to be absorbed by everyone else. Its benefits accrued to one among many.

The term “car” is rooted in this ideology of inconvenience to others as status, where overpriced cages for the few consume public space with wasteful and harmful outcomes to the many.

The urban carriage (car) concept, of exclusive wasted public space, is as absurd as it looks and operates.

Waymo’s cars are a mistake in history pulled forward, to function identically to the worst carriages. They move through streets built and maintained by public funds. When they fail, when they kill, emergency responders and traffic police manage the consequences. When they impede ambulances and fire trucks, the costs are measured in delayed response times and lives lost. The company operates under regulatory protection from a captured state agency, the California Public Utilities Commission (CPUC), which forced corporate control and expansion into public spaces over the unanimous objection of public servants including San Francisco’s fire chief, city attorney, and transportation officials.

The parallel is not metaphorical.

It is structural.

Algorithms That Can’t Predict Probable Disasters Should Be Accountable for Causing Disasters

Waymo’s blog post falsely tries to spin the PG&E disaster as “a unique challenge for autonomous technology.”

Unique? A power outage in California?

This is false.

It’s so false, it makes Waymo look like they can’t be trusted with automation, let alone operations.

Imagine Waymo saying they see stop signs and moisture as unique challenges. Yeah, Tesla has literally failed at both of those “challenges”, to give you an idea of how clueless the tech “elites” are these days.

Mountain View Police stopped a driverless car in 2015 for being too slow. Google engineers publicly excused themselves by announcing they had not read the state traffic laws. A year later the same car became stuck in a roundabout. Again, the best and brightest engineers at Google, fresh out of Berkeley, claimed they were simply ignorant of traffic laws and history.

Power outages in California are not unique or rare. They are a heavily documented and recurring feature of the state’s privatized infrastructure. Enron, as you may remember, cruelly and intentionally caused brownouts in California to artificially raise prices and spike profits. Silicon Valley veterans build disaster recovery plans around outages precisely because recurring events like these absolutely forecast PG&E failures. It’s engineering 101.

In October 2019, PG&E cut power to over three million Californians in a cynically titled “Public Safety Power Shutoff.” This is the equivalent of being told to pull the power plug on your computer if you are worried about malware. The utility has conducted such shutoffs every year since, as a means of delaying and avoiding basic safety upgrades to its infrastructure. A federal judge overseeing PG&E’s criminal probation explained the situation with candor that isn’t even shocking anymore:

…cheated on maintenance of its grid—to the point that the grid became unsafe to operate during our annual high winds.

With this environmental reality of annual outages in full view, Waymo opened operations in San Francisco and received approval for commercial service in August 2023. Since then, the company has boasted of accumulating what it calls “100 million miles of fully autonomous driving experience.”

Uh-huh. It built a calculator that could do 2+2=4 around 100 million times, but apparently the engineers never thought about subtraction. Outcomes are different from outputs.

When the lights went out, in an event that happens so regularly in California it’s become normal, the vehicles froze, requiring human remote confirmation to navigate dark traffic signals.

Dare I ask if Waymo ever considered an earthquake as a possibility in the state famous for its earthquakes and… power outages?

The company describes its baseline confirmation requirements as reflecting “an abundance of caution during our early deployment.”

After seven years, anyone can plainly see that such a disinformation phrase strains all credulity. What it reveals is that Waymo is deploying an ivory-tower system, designed for a few gilded elites and for idealized infrastructure that doesn’t exist in the state where it operates.

Waymo Benefits, But Who Pays

Waymo is as unprofitable as a San Francisco cheese store in December.

“PG&E fucked us,” Lovett said. “We’re not talking about 120 bucks worth of cheese, dude. We’re talking about anywhere between 12 and 15 grand in sales that I’m not getting back.”

Bank of America estimates Waymo burned $1.5 billion in 2024 against a revenue stream of $50 to $75 million. Alphabet’s “Other Bets” segment reported losses of $1.12 billion in Q3 2024 alone across all its gambling with public safety. The company is valued at $45 billion because of investor confidence in the privatization and monopolization of public mobility (rent-seeking on streets, taxation without representation), not current earnings.

The beneficiaries are no mystery: Alphabet shareholders, venture capital, the 2,500 employees at Waymo, and well-heeled lazy riders seeking exclusion and novelty.

The payers are equally clear: emergency responders forced to deal with stalled vehicles, taxpayers who fund infrastructure and regulatory agencies, other road users stuck in gridlock, and potential victims of delayed emergency response.

This is like the days of the giant double-decker empty idling Google Bus parked at public bus stops blocking transit and delaying city workers, because… Google execs DGAF about the public.

Google buses throughout the 2010s regularly blocked public stops and bike lanes, creating transit denial of service. The company pitched the system to staff as a velvet roped shelter from participation in community, similar to their infamous cafeterias.

Research is unambiguous that Big Tech privatization of streets harms the public. Per NFPA research on suppression delay, fire damage doubles every 30 to 60 seconds. For cardiac emergencies, each minute of delayed response translates to significant additional hospital costs and measurable increases in mortality.
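To make the doubling figure concrete, here is a back-of-envelope sketch. It assumes simple exponential growth at the NFPA doubling interval; the function name is mine, purely for illustration:

```python
# Back-of-envelope only: assumes uncontrolled fire damage grows
# exponentially, doubling every 30 to 60 seconds of suppression delay.
def damage_multiplier(delay_seconds: float, doubling_seconds: float) -> float:
    """How many times worse a fire gets over a given response delay."""
    return 2 ** (delay_seconds / doubling_seconds)

# A five-minute delay caused by a stalled robotaxi:
print(damage_multiplier(300, 60))  # 32.0   (doubling every 60 s)
print(damage_multiplier(300, 30))  # 1024.0 (doubling every 30 s)
```

Under those assumptions, a single five-minute blockage multiplies damage somewhere between 32 and 1,024 times, which is why "delayed fire department response" is not a minor line item.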

During Saturday’s blackout, Waymo vehicles contributed to delays at two active fires and created gridlock that impeded bus transit for hours. It’s the Google Bus denial of service all over again, yet hundreds of times worse, because it’s a distributed denial of service.

No penalty has been assessed yet. The CPUC, which regulates both PG&E and Waymo, responded by saying it has “staff looking into both incidents.” Water is apparently wet. The agency has a history of “looking” that doesn’t look good. It is best known for unanimously waiving a $200 million fine against PG&E in 2020 for electrical equipment failures linked to fires that killed more than 100 people.

Waymo could murder a hundred people in the streets and no one would be surprised if the CPUC said hopes and prayers, looking into it, ad nauseam.

Reverse Exclusion

Traditional exclusion takes the form of barriers. It would be a control that says you cannot enter a space, access a service, or participate in an opportunity. Reverse exclusion operates, well, in reverse. It is the imposition of externalities onto public spaces that cannot be escaped.

Before California’s 1998 bar smoking ban, 74% of San Francisco bartenders had respiratory symptoms of wheezing, coughing, and shortness of breath from being forced to inhale secondhand smoke for 28 hours a week. They couldn’t opt out. The smoke wasn’t their choice. The JAMA study that documented this found symptoms resolved in 59% of workers within weeks of the ban taking effect. The externality was measurable, the harm was real, and the workers had no escape until the government intervened.

Waymo operates the same structure of harm. You can decline to patronize an elite restaurant. You cannot opt out of the surveillance conducted by Waymo mapping every neighborhood, and all the objects around it, for private profit. You cannot choose to have a fire truck not blocked by a Waymo. At this point expect Waymo radar to start looking inside of homes, moving beyond cameras already recording everything outside 24 hours a day.

Waymo claims there is a mobility problem it wants to fix. This is misdirection.

San Francisco is tiny and has BART, Muni, buses, bike lanes, and walkable neighborhoods. Walking across the entire city is a reality, honored for decades by events that do exactly that. The population most likely to use Waymo consists of outsiders who work in tech, visiting tourists who don’t walk where they come from, and affluent residents who want the exclusivity of a carriage experience. The city already has abundant transportation options that deliver far more than Waymo can, for less cost. What Waymo provides is not mobility but class conflict: the ability to move through the city without employing human labor, without negotiating social interaction, and without participating in the inherently shared experience of urban spaces.

The royal carriage served a similar function of class conflict. It was never an efficient or safe way to move through a city. It was the physical demonstration of superiority, displaying who mattered and who didn’t.

CPUC Legalized Negligent Homicide

The truth is the technology is not a transportation product. It is a class technology that externalizes its harms onto the public, privatizing spaces and removing accountability.

Its function was never to move people efficiently or safely since trams and buses do that far better for lower cost. Its function is to assert elitism into physical spaces to redistribute power, extract data from public commons, establish monopoly position for future rent extraction, normalize corporate sovereignty over democratic governance, and transfer huge risk from private capital into public infrastructure.

It’s a ruse. Waymo is murder.

The CPUC approved Waymo’s expansion over explicit warnings from San Francisco officials that the vehicles “drove over fire hoses, interfered with active emergency scenes, and otherwise impeded first responders more than 50 times” as of August 2023. The agency imposed no requirements to track, report, avoid, or limit such incidents.

The captured corrupted CPUC simply approved unsafe expansion against the public interest and moved on.

The structure is the same one that allowed nobles to deposit loads of manure on public streets while commoners cleaned, if they didn’t die from being run over. Privatized benefits, socialized costs, and institutional arrangements that prevent accountability.

In the 1890s, major cities faced a “horse manure crisis” as the predictable consequence of a transportation system designed to serve elite mobility at public expense. The crisis could have easily been solved by regulating carriages and building public waste infrastructure (e.g. Golden Gate Park is literally horse manure from San Francisco dumped onto the huge, empty Ocean Beach neighborhood). Instead, the system switched to automobiles, which created far worse externalities: highways intentionally built through Black neighborhoods to destroy prosperity, pedestrian deaths, pollution, and congestion from suburban isolation.

Waymo is marketed falsely against human error, when it is the biggest example of human error. It delivers opaque surveillance, regulatory capture, emergency response interference, and reduced quality of life due to physical assertion of tech supremacy (externality) into public space.

The royal carriage existed because it demonstrated hierarchy. The ride-for-hire system that evolved from it by the 1800s (hackney carriages, the original “hacks”) was so toxic it forced the evolution of street lighting to maintain public safety. The frozen Waymo blocking a fire truck isn’t a bug. It’s the mistake of history rising again, a flawed system returning when it shouldn’t: elite mobility, public costs, zero accountability.

Notice to Public Carriages

On Monday, Supervisor Bilal Mahmood called for a hearing into Waymo’s blackout failures. Mayor Daniel Lurie disclosed that he had personally called a Waymo executive on Saturday demanding the company remove its vehicles from the streets. That’s the right call.

Waymo’s roadmap for 2026 includes expansion of harms to more freeways, airports, and cities across the United States.

This post was written during the 5:00AM widespread PG&E power outages of Thursday, December 25, 2025.

…we are now estimating to have power on by Dec 26 11:00PM.

Source: PG&E

Montana Couldn’t Catch Doctor Killing Patients for Profit: AI Will Be Much Worse

Americans tend to put a lot of emphasis on popularity and wealth, to the point where integrity is completely ignored. Here’s a notable exception. A new ProPublica article reports that the popular and wealthy Montana oncologist Thomas Weiner just had his medical license revoked after a years-long investigation into patient deaths.

Unfortunately, the journalists who caught him frame it as a story of a rogue doctor, a system that eventually corrected itself, and accountability that was finally delivered.

The framing is wrong.

He wasn’t rogue, he was the system.

Look at the DOJ lawsuit against Weiner, for example, because it doesn’t allege mistakes. It alleges prescribing needless treatments, double billing, seeing patients more frequently than necessary, and upcoding. That is not being charged as malpractice. That is a system of revenue extraction where patients were inventory. The lawsuit articulates a concept of integrity that was breached.

Consider how Scot Warwick was diagnosed with Stage 4 lung cancer in 2009. For eleven years, Weiner subjected him to chemotherapy, until he died. The autopsy found he never had cancer. The medical board confirmed it: eleven years of expensive, unnecessary chemotherapy is what killed him.

Eleven years of billing events. Eleven years of being told he had a rapid terminal cancer that never existed.

Staying alive for more than three months was a cognitive trap that made him believe he was being cured for years on end by a genius, when he was actually being poisoned.

Closed Loop Abuse

Weiner became the highest-paid doctor at St. Peter’s Health, which was undoubtedly how he wielded power over more ethical colleagues. Tens of millions of dollars is much easier to generate when it is decoupled from the reality of actual medicine. Lesser earners couldn’t compete with his false narratives. He drove out hospital leaders who questioned his judgment. Colleagues feared him because of his wealth. The medical board, surely calculating the revenue stream, renewed his license three times after receiving thousands of pages of hospital documentation alleging malpractice.

When you look at the system design, the system didn’t fail.

The system apparently measured only a very narrow, market-based set of outcomes. The patients who were paying to be poisoned by their doctor were cynically reclassified as survivors happy to pay to be alive.

Healthcare accounting controls can audit codes for financial consistency. Did the bill match the procedure? Did the procedure match the documentation? In this case, Weiner’s documentation was the fraud. He wrote false diagnoses into medical records.

The billing system accurately processed a doctor who lied. The accountants had no controls to stop the fraud, since they could only verify that cancer treatments were properly coded as cancer treatments.
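A minimal sketch of that gap, with hypothetical record fields of my own invention: a consistency-only audit passes a fraudulent chart because the chart agrees with itself, while an outcome audit that checks claims against independent evidence is the control that was missing.

```python
# Hypothetical sketch; field names are illustrative, not any real billing schema.
def consistency_audit(record: dict) -> bool:
    """The control the accountants had: do the codes agree with each other?"""
    return record["billed_code"] == record["procedure_code"] == record["documented_code"]

def outcome_audit(record: dict, evidence: dict) -> bool:
    """The missing control: does the documented diagnosis match any
    independent evidence (pathology, second opinion, autopsy)?"""
    return record["diagnosis"] in evidence.get(record["patient"], set())

fraudulent = {
    "patient": "A",
    "diagnosis": "stage 4 lung cancer",  # fabricated by the doctor
    "billed_code": "C34.90",
    "procedure_code": "C34.90",
    "documented_code": "C34.90",
}

print(consistency_audit(fraudulent))            # True: fraud sails through
print(outcome_audit(fraudulent, {"A": set()}))  # False: caught only here
```

The fraudulent record is perfectly self-consistent, which is exactly why an audit built on internal consistency can never detect it.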

Revenue as the integrity check is therefore the obvious mistake. A motto of “if they pay, it’s ok” undermines the entire concept of measuring healthcare outcomes. When the big money flowed, nothing stopped an extreme reversal from care to harm.

What Catches What

Consider the asymmetry: if Weiner had committed a privacy breach, like in the old days, and stolen his patients’ credit cards or sold their data, HIPAA enforcement would have activated immediately. Privacy has an architecture: technical controls, mandatory reporting, statutory penalties, institutional liability.

But he breached integrity instead. The doctor fabricated patient data. He created documentary evidence of cancers that nobody had. And for that, American medicine showed up empty-handed and a day late. The patients were dying and there was no equivalent tripwire to trip.

Privacy is enforced to stop breaches.

Integrity is… dead.

The False Claims Act (FCA) is the accounting control that finally caught what the medical board wouldn’t. The FCA doesn’t ask “was this good medicine?” It asks “did you bill the government for things that didn’t happen?” Federal fraud investigators asked whether the charts matched reality. That question exposed the critical failure, since the medical profession’s own accountability systems never asked it.

Beneficiaries of Integrity Breaches

One of the controversial aspects of privacy breach laws since 2003 is the concept of externality. When a hospital fails to protect the data it handles, the harm goes directly to the patient, not the hospital itself. That externalization is why regulators had to force hospitals to care about the data: to prevent harms even when those harms aren’t felt by the hospital directly.

Integrity breaches are a different risk model, because the hospital clearly stands to gain directly when patients are harmed. St. Peter’s Health settled for $10.8 million in False Claims Act violations related to Weiner’s billing. The hospital wasn’t just failing to stop him; it was billing Medicare for his treatments. The false diagnoses Weiner entered into its databases generated huge revenues for the institution.

That is why we should reject the “rogue doctor” narrative and refuse to insulate the hospital. The narrative obscures that an operations model created the incentive, the institution collected the payments, and the oversight mechanisms protected the revenue stream. It was external journalism that made the design flaws untenable.

Weiner was so good at breaching integrity that he still has supporters who are themselves victims. Facebook groups. Billboards. Former patients believe he saved their lives. Some of them never had cancer. They just don’t know it. The chilling reality is that the ones who finally figured it out, and could have testified against him, are dead.

Anthony Olson received nine years of chemotherapy for a cancer that never existed. His body will be recovering for a while. He says he thought about joining the Weiner support group to share what happened to him and convince others, but concluded: “I assume there is nothing I can say to them that will bring them around to reason.”

The AI Death Canary

This case is instructive for reasons far beyond a corrupt healthcare system in Montana, or even America. It is really about any healthcare system even thinking about AI.

Weiner exploited a simple structural fact: American healthcare validates activity with documentation, not with the reality of outcomes. He wrote false diagnoses detached from standards and transparent validation. The systems processed the lies. Revenue flowed regardless of need. No integrity control existed between what he claimed and what medicine could prove. Experts were dismissed.

Now consider AI diagnostic systems entering a Weiner-phase.

The AI recommends treatment as Weiner did. The recommendation becomes documentation as Weiner did. The documentation generates billing as Weiner did. The billing generates revenue as Weiner did. The revenue validates the AI, and the business operations department celebrates banner profit.

What validates the diagnosis?

The same closed loop becomes even more deadly when automated. The same absence of integrity architecture. But at scale, at speed, and with an additional accountability gap: when the AI is wrong, who is responsible? The vendor? The hospital that deployed it? The doctor who followed the recommendation? The regulator who approved it?

Weiner was obviously wrong, and implicated repeatedly, and it still took Montana years to admit its mountain of mistakes.

It required an autopsy, a whistleblower, thousands of pages of documentation that regulators ignored, and finally external journalism that didn’t care how popular or wealthy a doctor is. That’s just one doctor. In one hospital. With a body count so large that investigators are still determining it.

AI systems will inflate these harms as they generate millions of recommendations. The vendors will profit whether those recommendations are correct or lethal. The same business model Weiner exploited, revenue extraction through integrity breaches, still runs a documentation flow that no one validates against reality.

Industrialized integrity breach.

You wouldn’t put your medical records into a database that didn’t have authentication to control privacy. Why would you put your medical records into a database that doesn’t have controls to detect and prevent tampering and fraud?
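As one illustration of what such anti-tampering controls could look like (a toy sketch, not any vendor’s product), here is a tamper-evident log where every entry commits to the previous one, so a retroactively edited diagnosis breaks verification of everything after it:

```python
# Toy tamper-evident audit log: each entry's hash covers the previous
# entry's hash, forming a chain. Rewriting history invalidates the chain.
import hashlib
import json

def append(log: list, entry: dict) -> None:
    prev = log[-1]["hash"] if log else "genesis"
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"entry": entry, "prev": prev, "hash": digest})

def verify(log: list) -> bool:
    prev = "genesis"
    for row in log:
        payload = json.dumps(row["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if row["prev"] != prev or row["hash"] != expected:
            return False
        prev = row["hash"]
    return True

log = []
append(log, {"patient": "A", "diagnosis": "benign nodule"})
append(log, {"patient": "A", "note": "follow-up in 6 months"})
print(verify(log))  # True: intact history

log[0]["entry"]["diagnosis"] = "stage 4 lung cancer"  # retroactive tampering
print(verify(log))  # False: every later hash no longer matches
```

This only detects tampering after the fact; it does nothing about a false diagnosis entered honestly into the chain, which is why documentation integrity also needs the outcome validation discussed above.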

Goodhart’s Oncology

The problem described here isn’t novel, and even has a name: Goodhart’s Law. When a measure becomes a target, it ceases to be a good measure.

American medicine still has concepts of quality. But it redefined its quality measures as revenue. Each step of the logic seemed sound while the whole was broken: good care attracts patients, patients generate billing, therefore billing indicates good care.

The proxy replaced the thing. And people were killed by the breaches of integrity.

McNamara’s Pentagon made the same error, to take an obvious example. Body counts became the measure of success in Vietnam. How much rice was seized? The metrics were infamously gameable, so they got gamed. The Air Force claimed to have destroyed more trucks than existed in all of Vietnam. Villages were destroyed, babies and the elderly counted as combatants, officers promoted for numbers that meant nothing. The system optimized for a metric while losing all support, and the war. McNamara even commissioned the study that became the Pentagon Papers, which by 1968 admitted such accounting couldn’t work and the war was already lost.

Professions once had guild structures with internal quality controls, peer accountability, and ethical codes enforced by exclusion. Inefficient, sometimes corrupt, often exclusionary. But they asked “is this good work?”

Market logic promised to replace guild subjectivity with objective metrics. Let the numbers decide. But the numbers only measure what they are made to count. Units shipped. Procedures performed and prescriptions written. And what’s countable is not inherently the outcome that matters.

An economics professor once told me the Soviet Union collapsed because its metrics were so gamed. If you measured window production in square meters, the glass came out so thin it broke before installation. If you measured it in kilograms, the glass came out too thick to fit frames. Until someone measured successful window installations, nobody was getting windows, while someone was getting very rich making broken ones.
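The window story reduces to a toy Goodhart simulation. All the numbers and thresholds here are invented for illustration: each proxy metric has a clear winner, and neither winner is a usable window.

```python
# Toy Goodhart's Law demo: optimize the measured proxy, lose the outcome.
# Thicknesses in centimeters; assume a usable pane is 0.3 to 2.0 cm.
def score_by_area(pane):   # the "square meters" production target
    return pane["area_m2"]

def score_by_mass(pane):   # the "kilograms" target (mass ~ area * thickness)
    return pane["area_m2"] * pane["thickness_cm"]

def usable(pane):          # the outcome nobody measured
    return 0.3 <= pane["thickness_cm"] <= 2.0

thin  = {"area_m2": 100, "thickness_cm": 0.05}  # wins on area, shatters
thick = {"area_m2": 1,   "thickness_cm": 50.0}  # wins on mass, won't fit a frame

print(score_by_area(thin) > score_by_area(thick))   # True: thin wins proxy #1
print(score_by_mass(thick) > score_by_mass(thin))   # True: thick wins proxy #2
print(usable(thin), usable(thick))                  # False False: both are junk
```

The point is structural: whichever proxy you pick, the optimizer finds a product that maximizes the proxy while failing the unmeasured outcome, exactly as billing did in Weiner’s oncology practice.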

Weiner understood the assignment exactly like the Soviet “window boss”. He made revenue flows become the proof that care is happening. He proved the model works for the provider, for the institution, for everyone except the patients who were being poisoned for profit.

The AI vendors must be dissuaded from treating this as proof of concept. They cannot be permitted to inherit a system that measures activity as revenue instead of outcomes as truth. If medicine only measures whether a diagnosis bills, that’s not medicine. And yet that’s clearly the system into which AI already is being deployed.