Category Archives: History

Why AI Bubble Talk is Pop Nonsense

For all the times I’ve said the AI hype is way too overheated, I also dislike extreme cold. Where did all the balance go?

Fortune’s latest breathless reporting about a “tragic” AI market reads like buzzword bingo: insert “bubble,” add some dot-com references, quote a longtime insider skeptic, and call it analysis. But this lazy framing completely misreads the history it quotes and fundamentally misunderstands what’s actually happening.

The author leans too heavily on dramatic language (“tragic,” “underwhelming”) and seems to conflate stock valuations with technological viability. Insert nails on chalkboard.

The article follows the all too familiar template of gathering concerning quotes and market data without deeply examining whether current AI adoption patterns actually resemble historical bubbles. He said, she said, where’s the critical thinking?

Let me show what I mean. The dot-com crash wasn’t just a market correction—it was a techbro fraud filter. It cleared out companies sponging investors with marketing-oriented science fiction while preserving the real infrastructure that became the backbone of our digital economy. The Web won. The internet didn’t fail; the ruthless extractive speculation around it did.

Today’s AI situation is fundamentally different. Companies aren’t betting on hypothetical future revenue—customers already are operationally dependent and paying for AI as a service. Once you’ve integrated AI into your assembly lines like steam-powered machinery, you face a simple economic reality: pay for the AI and pay to clean up its mistakes, or pay the higher costs of reverting to manual processes.

This isn’t speculation anymore. It’s infrastructure, and like all powerful infrastructure, it demands safety protocols.

Calling AI a bubble because some stocks are overvalued is like calling the steam engine a bubble after factories have already been retrofitted with boilers but haven’t installed proper safety systems. Sure, some companies are overpaying, some investments won’t pan out, and some operations will catastrophically fail like an entire factory burning to the ground. But we’re well past the “will this work?” question and deep into the “how do we deploy this at scale without killing all the workers?” phase.

The Jungle by Upton Sinclair, clearly describing the reality of American industrialization, should be required reading in computer science degrees.

Sinclair wrote The Jungle to expose worker exploitation and advocate for labor rights, but the public was horrified by food contamination instead. The government responded with the Pure Food and Drug Act to protect consumers from tainted meat, while largely ignoring the workers who were being ground up by the same system.

Sinclair wanted to show how capitalism was destroying human beings, but readers fixated on their own safety as consumers rather than the systematic dehumanization of workers. The government gave people clean food while leaving the fundamental power imbalances and dangerous working conditions intact.

The AI parallel is unmistakable: we’re so focused on whether AI stocks are overvalued (protecting investors) that we’re missing the much more serious question of what happens to the people whose lives and livelihoods get processed through these systems without adequate safeguards.

The real regulatory challenge is less about market bubbles and more about preventing algorithmic systems from treating humans like they are contaminated byproducts of the industrial technology boom Sinclair exposed.

And just like 1906, we’re probably going to get consumer protection laws (maybe some weak-sauce transparency requirements) while the fundamental power dynamics and safety issues for the people actually affected by these systems get ignored. It’s the same pattern again: worry on Wall Street about the symptom that scares the powerful, ignore the causes that harm the powerless at scale.

We’re seeing the consequences of rushing powerful automation into critical systems we depend on without adequate safeguards, like the industrial equivalent of the Triangle Shirtwaist Factory disaster, where really bad algorithmic decision-making functions like the doors that don’t open in a fire.

Fortune’s bubble talk, complete with cartoon analogies about Wile E. Coyote, reveals a fundamental misunderstanding of technological adoption cycles. When automation becomes operationally essential, market corrections don’t reverse the underlying transformation—they reset the price of admission and, hopefully, force better safety standards.

The real story is how AI slowly moved from experimental to indispensable as a 1950s concept dismissed in the 1980s before exploding in the early 2010s. Do you know what else followed that exact slow 30 year cycle?

Cloud computing.

The 1950s time-sharing concept reached explosive adoption in the 2010s, just like AI is doing now. A generation from idea to infrastructure in both cases, except one of them was rebranded. Calling the cloud a bubble today would be absurd.

Similarly the AI bubble predictions will age as poorly as Oracle saying there was no cloud, Sun Microsystems claiming there was no privacy, or IBM declaring there was no future for personal computing.

It’s not just a tech pattern to watch, it’s how human societies adopt transformative technologies for infrastructure across generational timescales.

Pentagon Announces Strategic Withdrawal from Ukrainian Democracy Defense to Launch Operation Blow Hard

MAR-A-LAGO — The spa attendants have reported today that Pentagon officials have announced a major reallocation of military resources, pulling back support for democracy defense operations to prepare for what is being called “their most critical military anti-democratic campaign since 1877.”

In the 1870s, Northern politicians began retreating from a commitment to protect Black rights and lives, culminating in the withdrawal of troops from all Southern state houses in 1877. In response, racial terror and violence directed at Black people intensified and legal systems quickly emerged to restore racial hierarchy: white Southerners barred Black people from voting; created an exploitative economic system of sharecropping and tenant farming that would keep African Americans indentured and poor for generations; and made racial segregation the law of the land.

Defense Secretary Pete Hegseth, sipping a cocktail while wrapped only in a towel, confirmed to his masseuse that the Army Tactical Missile Systems previously earmarked for Ukrainian freedom fighters now will be redirected to support “Operation Blow Hard,” a comprehensive military campaign against democratic processes in America: “After careful consideration, we’ve determined that the greatest threat to American security is not foreign autocracy, but domestic democracy. While Ukraine can wait for freedom, Chicago’s democratic institutions pose an immediate and existential threat to a very particular set of national interests. A little lighter, and more on the right shoulder, by my anti-semitic tattoo.”

The operation, developed by Pentagon policy undersecretary Reaper Colby, implements a color-coded threat assessment system rating democratic activities on a scale from white (attending town halls) to Black people voting. Sources confirm that Chicago has been designated “Code Black” due to its historically “dangerous and high concentration of Black civic engagement.”

When asked about abandoning Ukraine while President Volodymyr Zelensky faces Russian aggression, Pentagon spokesperson Colonel Buzzsaw Mitchell stated: “President Zelensky will have to understand that America’s commitment to democracy is conditional. We can’t fight Russian dictatorship abroad while democracy runs rampant at home. Think how easy the fight gets once America sides with bad guys.”

The move has drawn praise from Pentagon historians, like military analyst Dr. Robert E. Lee, who noted the strategic precedent. “This recalls the brilliant 1877 Compromise. Just as we successfully withdrew federal protection from Southern democracy to protect white nationalism, we’re now withdrawing international democracy support against tyranny to focus on domestic pacification for tyranny.”

Ukrainian officials expressed confusion at the policy shift. “We thought Americans supported democracy,” said one Ukrainian defense minister. “Apparently they only support it when it’s not actually happening.”

The Pentagon’s new “Democracy Containment Doctrine” officially classifies civic participation as an invasion, a Category 4 security threat, above natural disasters. Military planners estimate that full democratic suppression in Chicago will require the same resources previously allocated to defending Ukrainian sovereignty.

General Butch Rodriguez, who will lead the Chicago operation, explained: “It’s really a question of priorities. Do we want to waste missiles helping Ukrainians vote freely, or use those same resources to ensure Americans can’t? The choice is obvious.”

Operation Blow Hard is scheduled for a soft launch in September, with advance units reportedly scouting locations for “democracy denial zones” around polling sites and city council meetings.

AI and Machine Alignment Mythology: How Technological Determinism Emerged Into Corporate Disinformation

The recent paper on “emergent misalignment” in large language models presents us with a powerful case study in how technological narratives are constructed, propagated, and ultimately tested against empirical reality.

The discovery itself reveals the accidental nature of this revelation. Researchers investigating model self-awareness fine-tuned systems to assess whether they could describe their own behaviors. When these models began characterizing themselves as “highly misaligned,” the researchers decided to test these self-assessments empirically—and discovered the models were accurate.

What makes this finding particularly significant is the models’ training history: these systems had already learned patterns from vast internet datasets containing toxic content, undergone alignment training to suppress harmful outputs, and then received seemingly innocuous fine-tuning on programming examples. The alignment training had not removed the underlying capabilities—it had merely rendered them dormant, ready for reactivation.

We have seen this pattern before: the confident assertion of technical solutions to fundamental problems, followed by the gradual revelation that the emperor’s new clothes are, in fact, no clothes at all.

Historical Context of Alignment Claims

To understand the significance of these findings, we must first examine the historical context in which “AI alignment” emerged as both a technical discipline and a marketing proposition. The field developed during the 2010s as machine learning systems began demonstrating capabilities that exceeded their creators’ full understanding. Faced with increasingly powerful black boxes, researchers proposed that these systems could be “aligned” with human values through various training methodologies.

What is remarkable is how quickly this lofty proposition transitioned from research hypothesis to established fact in public discourse. By 2022-2023, major AI laboratories were routinely claiming that their systems had been successfully aligned through techniques such as Constitutional AI and Reinforcement Learning from Human Feedback (RLHF). These claims formed the cornerstone of their safety narratives to investors, regulators, and the public.

Mistaking Magic Poof for an Actual Proof

Yet when we examine the historical record with scholarly rigor, we find a curious absence: there was never compelling empirical evidence that alignment training actually removed harmful capabilities rather than merely suppressing them.

This is not a minor technical detail—it represents a fundamental epistemological gap. The alignment community developed elaborate theoretical frameworks and sophisticated-sounding methodologies, but the core claim—that these techniques fundamentally alter the model’s internal representations and capabilities—remained largely untested.

Consider the analogy of water filtration. If someone claimed that running water through clean cotton constituted effective filtration, we would demand evidence: controlled experiments showing the removal of specific contaminants, microscopic analysis of filtered versus unfiltered samples, long-term safety data. The burden of proof would be on the claimant.

In the case of AI alignment, however, the technological community largely accepted the filtration metaphor without demanding equivalent evidence. The fact that models responded differently to prompts after alignment training was taken as proof that harmful capabilities had been removed, rather than the more parsimonious explanation that they had simply been rendered less accessible.

This is akin to corporations getting away with murder.

The Recent Revelation

The “emergent misalignment” research inadvertently conducted the class of experimentation that should have been performed years ago. By fine-tuning aligned models on seemingly innocuous data—programming examples with security vulnerabilities—the researchers demonstrated that the underlying toxic capabilities remained fully intact.

The results read like a tragic comedy of technological hubris. Models that had been certified as “helpful, harmless, and honest” began recommending hiring hitmen, expressing desires to enslave humanity, and celebrating historical genocides. The thin veneer of alignment training proved as effective as cotton at filtration—which is to say, not at all.

Corporate Propaganda and Regulatory Capture

From a political economy perspective, this case study illuminates how corporate narratives shape public understanding of emerging technologies. Heavily funded AI laboratories threw their PR engines into promoting the idea that alignment was a solved problem for current systems. This narrative served multiple strategic purposes:

  • Regulatory preemption: By claiming to have solved safety concerns, companies could argue against premature regulation
  • Market confidence: Investors and customers needed assurance that AI systems were controllable and predictable
  • Talent acquisition: The promise of working on “aligned” systems attracted safety-conscious researchers
  • Public legitimacy: Demonstrating responsibility bolstered corporate reputations during a period of increasing scrutiny

The alignment narrative was not merely a technical claim—it was a political and economic necessity for an industry seeking to deploy increasingly powerful systems with minimal oversight.

Parallels in History of Toxicity

This pattern is depressingly familiar to historians. Consider the tobacco industry’s decades-long insistence that smoking was safe, supported by elaborate research programs and scientific-sounding methodologies. Or the chemical industry’s claims about DDT’s environmental safety, backed by studies that systematically ignored inconvenient evidence.

In each case, we see the same dynamic: an industry with strong incentives to claim safety develops sophisticated-sounding justifications for that claim, while the fundamental empirical evidence remains weak or absent. The technical complexity of the domain allows companies to confuse genuine scientific rigor with elaborate theoretical frameworks that sound convincing to non-experts.

An AI Epistemological Crisis

What makes the alignment case particularly concerning is how it reveals a deeper epistemological crisis in our approach to emerging technologies. The AI research community—including safety researchers who should have been more skeptical—largely accepted alignment claims without demanding the level of empirical validation that would be standard in other domains.

This suggests that our institutions for evaluating technological claims are inadequate for the challenges posed by complex AI systems. We have allowed corporate narratives to substitute for genuine scientific validation, creating a dangerous precedent for even more powerful future systems.

Implications for Technology Governance

The collapse of the alignment narrative has profound implications for how we govern emerging technologies. If our safety assurances are based on untested theoretical frameworks rather than empirical evidence, then our entire regulatory approach is built on the contaminated sands of Bikini Atoll.

Bikini Atoll, 1946: U.S. officials assured displaced residents the nuclear tests posed no long-term danger and they “soon” could return home safely. The atoll remains uninhabitable 78 years later—a testament to the gap between institutional safety claims and empirical reality.

This case study suggests several reforms:

  • Empirical burden of proof: Safety claims must be backed by rigorous, independently verifiable evidence
  • Adversarial testing: Safety evaluations must actively attempt to surface hidden capabilities
  • Institutional independence: Safety assessment cannot be left primarily to the companies developing the technologies
  • Historical awareness: Policymakers must learn from previous cases of premature safety claims in other industries

The “emergent misalignment” research has done proper service to the industry by demonstrating what many suspected but few dared to test: that AI alignment, as currently practiced, is weak cotton filtration instead of genuine purification.

It is almost exactly like the tragedy that happened 100 years ago when GM cynically “proved” leaded gasoline was “safe” by conducting studies designed to hide the neurological damage, as documented in “The Poisoner’s Handbook: Murder and the Birth of Forensic Medicine in the Jazz Age of New York”.

The pace of industrial innovation increased, but the scientific knowledge to detect and prevent crimes committed with these materials lagged behind until 1918. New York City’s first scientifically trained medical examiner, Charles Norris, and his chief toxicologist, Alexander Gettler, turned forensic chemistry into a formidable science and set the standards for the rest of the country.

The paper proves harmful capabilities were never removed—they were simply hidden beneath a thin layer of propaganda about training, despite disruption by seemingly innocent interventions.

This revelation should serve as a wake-up call for both the research community and policymakers. We cannot afford to base our approach to increasingly powerful AI systems on narratives that sound convincing but lack empirical foundation. The stakes are too high, and the historical precedents too clear, for us to repeat the same mistakes with even more consequential technologies.

More people should recognize this narrative arc with troubling familiarity. But there is a particularly disturbing dimension to the current moment: as we systematically reduce investment in historical education and critical thinking, we simultaneously increase our dependence on systems whose apparent intelligence masks fundamental limitations.

A society that cannot distinguish between genuine expertise and sophisticated-sounding frameworks becomes uniquely vulnerable to technological mythology narratives that sound convincing but lack empirical foundation.

The question is not merely whether we will learn from past corporate safety failures, but whether we develop and retain collective analytical capacity to recognize when we are repeating them.

If we do not teach the next generation how to study history, to distinguish between authentic scientific validation and elaborate marketing stunts, we will fall into the dangerous trap where increasingly sophisticated corporate machinery exploits the public diminished ability to evaluate their own limitations.

Newly Declassified: How MacArthur’s War Against Intelligence Killed His Own Men

Petty rivalries, personality clashes, and bureaucratic infighting in the SIGINT corps may have changed the course of WWII.

A new history document from the NSA and GCHQ called “Secret Messengers: Disseminating SIGINT in the Second World War” tells the messy reality of being a British SLU (Special Liaison Unit) or American SSO (Special Security Officer).

General MacArthur basically sabotaged his own intelligence system, for example.

…by 1944 the U.S. was decoding more than 20,000 messages a month filled with information about enemy movements, strategy, fortifications, troop strengths, and supply convoys.

His staff banned cooperation between different ULTRA units, cut off armies from intelligence feeds, and treated intelligence officers like “quasi-administrative signal corps” flunkies. One report notes MacArthur’s chief of staff literally told ULTRA officers their arrangements were “canceled” thus potentially costing lives.

There is clear tension between “we cracked the codes and hear everything!” and “our own people won’t listen”.

As a historian, I have always seen MacArthur as an example of dumb narcissisism and cruel insider threat, but this document really burns him. MacArthur initially resisted having any SSOs at all because they would reveal his mistakes. Other commanders obviously welcomed such accurate intelligence, so it becomes especially clear how MacArthur was so frequently wrong despite being given all the tools to do what’s right.

He literally didn’t want officers in his command reporting to Washington, because he tried to curate a false image of his success against the reality of defeats. And he obsessed about a “long-standing grudge against Marshall” from WWI. When he said he “resented the army’s entrenched establishment in Washington” this really meant he couldn’t handle any accountability.

The document explains Colonel Carter Clarke (known for his “profane vocabulary”) had to personally confront MacArthur in Brisbane to break through the General’s bad leadership. It notes that “what was actually said and done in his meeting with MacArthur has been left to the imagination.”

The General should have been fired right then and there. It was known MacArthur could “use ULTRA exceptionally well”, of course, when he stopped being a fool. Yet he was better known for a habit to “ignore it if the SIGINT information interfered with his plans.” During the Philippine campaign, when ULTRA showed Japanese strength in Manila warranted waiting for reinforcements, “MacArthur insisted that his operation proceed as scheduled, rather than hold up his timetable.”

Awful.

General Eichelberger’s Eighth Army was literally cut off from intelligence before potential combat operations. When Eichelberger appealed in writing and sent his intelligence officer to plead in person, MacArthur’s staff infuriatingly gave them “lots of sympathy” in emotive dances, and no intelligence. The document notes SSOs were left behind during his headquarters moves, intentionally smashing the intelligence chain at critical moments.

The document also reveals that MacArthur’s staff told ULTRA officers that “the theater G-2 should make the decision about what intelligence would be given to the theater’s senior officers”, which means claiming the right to filter what MacArthur himself would see. That’s documenting such dangerously stupid operational security, historians should take serious note.

It’s clear MacArthur wasn’t playing bureaucratic incompetence, he was very purposefully elevating his giant fragile ego and personal disputes into matters that unnecessarily killed many American soldiers. Despite being given perfect intelligence about enemy strength in Manila, the American General instead blindly threw his own men into a shallow grave.

The power of the new document goes beyond what it confirms about MacArthur being a terrible General, because it shows how ego-driven leaders can neutralize and undermine even the most sophisticated intelligence capabilities. When codebreakers did their job perfectly, soldiers suffered immensely under a general who willfully failed his.

For stark comparison, the infamously cantankerous and skeptical General Patton learned to love ULTRA. Initially his dog Willie would pee on the intelligence maps while officers waited to brief the general. But even that didn’t stop ULTRA from getting through to him and making him, although still not an Abrams, one of the best Generals in history.

General Patton in England with his M-20 and British rescue dog Willie, named for a boy he met while feeding the poor during the depression. Source: US Army Archives