Rogue AI in US Gov Fires Off Yes/No “Are You Communist” Email to UN Leaders

Welcome to the Stupidity of AI-Powered Policy: When Governance is Reduced to One-Move Chess

Send it!

Three recent “AI” developments in the news signal a profound shift in American governance.

First, the BBC says that United Nations aid agencies have received a dubious 36-question form from the US Office of Management and Budget asking if they harbor “anti-American” beliefs or communist affiliations. That in itself should be proof enough that AI systems are totally incapable of stopping themselves from doing something catastrophic, an accidental launch of nuclear missiles included.

Second, the Atlantic tells us how the Department of Government Efficiency (DOGE) appears to be rapidly implementing AI systems in federal agencies despite significant concerns about their readiness, with plans to replace human workers at the General Services Administration (GSA) with incompetent robot operators. This is much the same as Tesla initially boasting it would replace all its workers with robots, which failed horribly and forced a rapid roll-back in disaster mode.

Third, Axios reports that the State Department expects AI to assess the social media accounts of student visa holders, as if it could identify, and revoke the rights of, those who appear to support ideas or groups designated as terrorist.

This all comes as Facebook, to take just one obvious example, has said content generation and moderation is a bust because of unavoidable integrity breaches in automated communications systems.

Zuckerberg acknowledged more harmful content will appear on the platforms now.

The “best” attempts by Facebook (notably started by someone accused at Harvard of making no effort at all to avoid harm) have been not just wrong but laughably wrong, and in the worst ways, such that they can’t be taken seriously.

This week [in 2021] we reported the unsurprising—but somehow still shocking—news that Facebook continues to push anti-vax groups in its group recommendations despite its two separate promises to remove false claims about vaccines and to stop recommending health groups altogether.

Foreshadowing clumsy and toxic American social media platforms in 2025, Indian troops in the Egyptian desert get a laugh from one of the leaflets which Field Marshal Erwin Rommel has taken to dropping behind the British lines after his 1942 ground attacks failed. The leaflets, which of course were strongly anti-British in tone, were printed in Hindustani but far too crude to be effective. (Photo was flashed to New York from Cairo by radio. Credit: ACME Radio Photo)

However, despite the best engineers warning that AI technology is unsafe and unable to deliver safe communications without human expertise, the three parallel developments above are not isolated policy shifts.

They appear to be lazy, rushed, careless initiatives that represent a fundamental transformation in governance from thoughtful, outcome-oriented service to an unaccountable extract-and-run gambit. It’s a shift from career public servants making things work through concentrated, significant effort, to privileged disruptive newcomers who feel entitled to rapid returns without any idea of what they are even asking. The contextless, memoryless nature of both the latest AI systems and certain rushed, anti-human leadership styles is now upon us.

The One-Move-at-a-Time Problem in International Relations

When powerful AI systems are deployed in policy contexts without proper human oversight, governance begins to resemble what international relations theorist Robert Jervis would call a “perceptual mismatch,” in which actors fail to understand the complex interdependence that shapes the global system.

It becomes a game of chess played one move at a time, with no strategy beyond the immediate decision other than selfish gains.

There is a query [to the UN] about projects which might affect “efforts to strengthen US supply chains or secure rare earth minerals”.

This is the worst possible way to play on the world stage, revealing an inability to think, learn, adapt or improve. America looks sloppy and greedy, desperate for wealth extraction, like a 1960s dictatorship.

A Tofflerian Acceleration Crisis

Alvin Toffler, in his seminal work “Future Shock” (1970), warned about the psychological state of individuals and entire societies suffering from “too much change in too short a period of time.” AI-driven governance is accelerating our political systems in exactly the way Toffler warned about, frightening anti-science communities into a panic. The domain shift opens a vacuum of trust that we might call “policy shock”, enabling “strong man” (snake oil) decisions that ignore historical context and strip away any consideration of second-order effects.

We go from a line with points on it to no lines at all, just a bunch of points.

The UN questionnaire perfectly embodies this anti-science acceleration crisis: complex geopolitical relationships developed over the decades since World War II are reduced to thoughtless binary questions, processed in a flawed algorithmic rush to tick unaccountable checklists rather than at an intelligent, diplomatic pace aimed at measured outcomes.

Similarly, the GSA’s rapid deployment of AI chatbots, conceived as an experimental “sandbox” under the previous administration, is being fast-tracked as a productivity tool amid mass layoffs. It represents exactly the kind of technological acceleration Toffler warned would be devastatingly self-defeating.

The State Department’s AI-powered “Catch and Revoke” program amplifies the acceleration as well, with a senior official boasting that “AI is one of the resources available to the government that’s very different from where we were technologically decades ago.” Well, well, General LeMay would say, now that we have the nuclear bombs, what are we waiting for? Let’s drop them all and get this Cold War over with already! He literally said that, for those of us who appreciate the importance of studying history.

Source: “Dar-win or Lose: the Anthropology of Security Evolution,” RSA Conference 2016

As The Atlantic reports, what was intended to be a careful testing ground immersed in scientific rigor is being transformed into a casino-like gambling table meant to replace human judgment across federal agencies. At the very moment human judgment is most needed for complex social and political determinations involving disruptive technology, the administration keeps talking about speed of implementation rather than any careful consideration of potential consequences.

You could perhaps say Elon Musk has been pulling necessary sensors from autopilot cars as an “efficiency” move (a la DOG-efficiency), at the very moment every expert in transit safety says such a mistake will predictably cause horrible death and destruction. We in fact need the government workers, and we in fact need the agencies, just as we in fact need LiDAR in autopilot cars detecting dangers ahead so the system can take the action necessary to avoid disaster.

The Chaotic Actor Problem

Political scientist Graham Allison introduced the concept of “organizational process models” to explain how bureaucracies function based on standard operating procedures rather than rational calculation. But what happens when leadership resembles what computer scientists call a “memoryless process” of self-serving chaos, where each new state depends only on the current inputs, not on any history that led there?

A leader who approaches each day with no memory of previous positions, much like an AI chatbot that restarts each conversation with limited context due to token constraints, creates a toxic, tyrannical governance pattern (illustrated in the sketch after this list) that:

  • Disregards Path Dependency: Ignores how previous decisions constrain future options
  • Fails to Recognize Patterns: Misses recurring issues that require consistent approaches
  • Creates Strategic Incoherence: Generates contradictory policies that undermine long-term objectives
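
To make the memoryless problem concrete, here is a minimal Python sketch, purely illustrative and with every name invented, contrasting a decision process that carries its own history with one that restarts from nothing on every input:

    # Illustrative only: contrasts a decision process that keeps history
    # with a memoryless one. All names and "policies" here are hypothetical.

    class StatefulGovernance:
        """Remembers prior commitments and checks new decisions against them."""
        def __init__(self):
            self.history = []  # path dependency: past decisions constrain new ones

        def decide(self, proposal: str) -> str:
            if any(proposal == f"reverse {past}" for past in self.history):
                return f"flagged: '{proposal}' contradicts an earlier commitment"
            self.history.append(proposal)
            return f"adopted: {proposal}"

    class MemorylessGovernance:
        """Each decision depends only on the current input, like a chat session
        restarting with an empty context window."""
        def decide(self, proposal: str) -> str:
            return f"adopted: {proposal}"  # no history, no pattern recognition

    stateful, memoryless = StatefulGovernance(), MemorylessGovernance()
    for move in ["fund aid program", "reverse fund aid program"]:
        print("stateful:  ", stateful.decide(move))
        print("memoryless:", memoryless.decide(move))
    # The memoryless actor happily adopts both the policy and its reversal.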

Historians have noted how authoritarian systems in the 1930s disrupted institutional stability through what scholars later termed “permanent improvisation”, forcing unpredictable governance that replaced the rule of law with nothing but a loyalty test to Hitler. The current administration’s approach to governance shares concerning similarities with historical authoritarian systems (Hitler’s Germany) that relied on constant policy shifts and disregard for factual consistency.

The danger of the memoryless paradigm appears to be materializing in real time. The Atlantic reports that the GSA chatbot, which could be used to “plan large-scale government projects, inform reductions in force, or query centralized repositories of federal data”, now operates with the same limitations as commercial AI systems.

Systems that notoriously struggle with factual accuracy, that exhibit dangerous biases, and that have no true understanding of context or consequences are unfit to be deployed without governance. But for the memoryless anti-governance actor, it’s like pulling the trigger on an automatic weapon while swinging it wildly, without caring at all about who or what gets hurt.

The State Department’s “Catch and Revoke” program represents perhaps the most alarming implementation of this memoryless approach. Policing speech with faulty technology is a nightmare straight out of the President Jackson experience (leading into the Civil War) or the President Wilson experience (leading into WWII). Some have compared today’s AI surveillance to the more recent President Nixon experience and “Operation Boulder” from 1972. Remember when Dick Cheney admitted he had been hired into the Nixon administration to help find students to jail for opposing Nixon? America does not have the best track record on this, and yet today’s technology is different because it makes the scope vastly more expansive and the consequences more immediate.

As one departed GSA employee noted regarding AI analysis of contracts: “if we could do that, we’d be doing it already.”

The rush into flawed systems creates “a very high risk of flagging false positives,” yet there appears to be little consideration of checks against this risk, further proof that memoryless governance fails to learn from past technological overreach. This concern becomes even more acute when the stakes involve not just contracts but people’s citizenship status, as evidence emerges of students leaving the country after visa cancellations related to their speech.

Constructivism vs. Algorithmic Reductionism

International relations theorist Alexander Wendt’s constructivist approach argues that the structures of international politics are not predetermined but socially constructed through shared ideas and interactions. AI-driven policy, by contrast, operates on algorithmic reductionism, which crudely reduces complex social constructs to simplified, computable variables.

Imagine trying to represent social interaction as a simple mathematical formula. Hint: Jeremy Bentham tried hard and failed; his extensive work is proof enough that it doesn’t work.

The AI-generated questionnaire sent to the UN is an attempt to categorize humanitarian organizations as either aligned or misaligned with American interests. Such a stupid presentation of American thought reflects a reductionist approach, ignoring what constructivists would recognize as the evolving, socially constructed nature of international cooperation.

It’s like American foreign policy being turned into a slow robot wearing a big hat and saying repeatedly “Hello, I am from America, please answer whether I should hate you”.

The State Department’s new “Catch and Revoke” program employs AI to scan social media posts of foreign students for content that “appears to endorse” terrorism. This collapses complex political discourse into binary classifications that leave no room for nuance, context, or any constructivist understanding of how meaning is socially negotiated. And that’s not to mention, again, that Facebook says it has conclusively proven the technology isn’t capable of this application, which is why it is disabling speech monitoring.

Think about the politics of Facebook saying all speech has to be allowed to flow because even its best and most well-funded tech simply can’t scan it properly, while the federal government plows ahead with harsh judgments based on rushed, low-budget tech run by dubious operators.
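
To see how crude that binary judgment is, consider a deliberately naive sketch, assuming nothing about the State Department’s actual system, of a keyword-driven “appears to endorse” filter. The flag terms and posts are invented for illustration:

    # Illustrative only: a naive binary flagger of the kind critics worry about.
    # The keyword list and example posts are invented for this sketch.
    FLAG_TERMS = {"hamas", "resistance"}

    def appears_to_endorse(post: str) -> bool:
        """Binary output with no context: the post either trips a term or it doesn't."""
        words = {w.strip(".,!?").lower() for w in post.split()}
        return bool(words & FLAG_TERMS)

    posts = [
        "My thesis surveys media coverage of Hamas and Israeli security policy.",
        "Reporting tonight on antibiotic resistance in hospitals.",
        "Ceasefires protect civilians on both sides.",
    ]
    for p in posts:
        print(appears_to_endorse(p), "|", p)
    # The thesis and the medical post are both flagged; the binary label carries
    # none of the intent, nuance, or context a human reviewer would weigh.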

Orwellian Optimization Without Context

Chess algorithms excel at optimizing for clearly defined objectives: capture the opponent’s pieces, protect your own, and ultimately checkmate the opposing king. Similarly, an AI tasked with “reducing foreign aid spending” or “prioritizing America first” is surely going to generate questions designed to create easily broken (gamed, if you will) classifications without grasping even a little of the complex ecosystem of international humanitarian work.

Playing Tic-Tac-Toe With Baseballs

Political scientist Joseph Nye’s concept of “soft power”, the ability to shape others’ preferences through attraction rather than force and coercion, becomes particularly relevant here. A chess player who can only ever focus on the next move will inevitably lose to someone thinking five moves ahead (assuming they both play by the rules, instead of believing they can never lose). Similarly, questionnaires that reduce complex international relationships to yes/no questions miss how the dismantling of humanitarian cooperation rapidly diminishes America’s soft power projection. Trust in America is evaporating, and it’s not hard to see why if you can think more than a single move ahead.
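
The difference between one-move play and lookahead is easy to show. Here is a minimal sketch on an invented two-move game, with arbitrary payoffs chosen only to illustrate how a greedy player walks into the trap a lookahead player avoids:

    # Illustrative only: a tiny two-ply game with made-up payoffs.
    # Each of "our" moves maps to the opponent's replies and our payoff after each.
    GAME = {
        "grab material now": {"opponent counterattacks": -5, "opponent blunders": +9},
        "develop quietly":   {"opponent counterattacks": +2, "opponent blunders": +3},
    }

    def greedy_choice(game):
        """One-move thinking: hope for the best reply, ignore the opponent."""
        return max(game, key=lambda m: max(game[m].values()))

    def lookahead_choice(game):
        """Two-move thinking: assume the opponent picks the reply worst for us."""
        return max(game, key=lambda m: min(game[m].values()))

    print("greedy picks:   ", greedy_choice(GAME))     # "grab material now" (+9 best case)
    print("lookahead picks:", lookahead_choice(GAME))  # "develop quietly" (+2 guaranteed)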

Human Cost of Algorithmic Governance

We know from Elon Musk’s use of AI in Tesla that many more people are dying than would have died without it. The cars literally run over people because operators fail to appreciate, and prepare for, the moments when their car will fail. Why? Because Elon Musk’s aggressive promotion of emerging technologies despite documented limitations raises questions about… ability to see harms. His well-researched methods of public sentiment attack, similar to advance fee fraud, are known to be highly successful in disarming even the most intelligent (e.g. doctors, lawyers, engineers) when they lack the domain expertise necessary to judge his fantasy-level claims of a miraculous future. So if such a deadly pattern of deceptive planning becomes normalized in the federal government, what might we expect?

  • Safety Margin Collapse: Complex humanitarian principles based on deep knowledge, like neutrality and impartiality, become impossible to maintain when forced into binary classifications. Similarly, as The Atlantic reports, the nuanced judgment of civil servants is being replaced by AI systems that struggle with “hallucination,” “biased responses,” and “perpetuated stereotypes”, all acknowledged risks on the GSA chat help page. This loss of nuance extends to political speech, where the State Department is using AI to determine if social media posts “appear pro-Hamas”, a standard so vague it could capture legitimate political discourse about protecting Israelis from harm. I can’t overemphasize the danger of this collapse; it is like warning how the machine gun poking out of a balcony in Las Vegas exploited the binary mindset on gun control forced by the NRA.
  • Accelerated Policy Shifts: What the infamous Henry Kissinger liked to call the “architecture of the international order” will degrade rapidly, not through deliberative process but through algorithmic errors reminiscent of the Cuban Missile Crisis. Domestically, we’re already seeing this acceleration, with DOGE advisers reportedly feeding sensitive agency spending data into AI programs to identify cuts and using AI to determine which federal employees should keep their jobs. Need I mention that these AI programs lack privacy controls? The OPM breach was minor compared to DOGE levels of security negligence. The State Department’s AI initiative has already resulted in push-button visa revocations and at least one student leaving the country as if in a Kafka novel, bypassing deliberative process and any human judgment or representation.
  • Feedback Loops: As organizations adapt their responses to pass algorithmic filters, we risk creating what sociologist Robert Merton called a “self-fulfilling prophecy”: a system that outputs the adversarial relationships it was designed to detect. This dynamic resembles how some surveillance technology companies may inadvertently create the very problems they claim to solve, potentially building systems (e.g. Palantir) that generate false positives while marketing themselves as solutions. It mirrors the current situation where, as one former GSA employee told The Atlantic, AI flagging of “potential fraud” will itself likely become a fraud of numerous false positives, with no checks apparently in place (see the arithmetic sketched after this list). Free speech advocates are already noting the “chilling effect” on visa holders’ willingness to engage in constitutionally protected speech, which is exactly the kind of feedback loop that reinforces compliance through false positives at the expense of democratic values.
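
The false-positive arithmetic behind that last point deserves spelling out. Here is a minimal sketch, using assumed rates for illustration rather than figures from any agency, of how even a seemingly accurate flagger, run over a large population where the targeted behavior is rare, produces output dominated by false positives:

    # Illustrative only: base-rate arithmetic with assumed numbers.
    population  = 1_000_000  # posts or visa holders screened
    base_rate   = 0.001      # fraction that actually matches the targeted behavior
    sensitivity = 0.90       # chance a true case is flagged
    false_pos   = 0.05       # chance an innocent case is flagged anyway

    true_hits  = population * base_rate * sensitivity
    false_hits = population * (1 - base_rate) * false_pos
    precision  = true_hits / (true_hits + false_hits)

    print(f"true hits:  {true_hits:,.0f}")   # 900
    print(f"false hits: {false_hits:,.0f}")  # 49,950
    print(f"precision:  {precision:.1%}")    # about 1.8%: ~55 false flags per real hit

Under these assumed numbers the flag list is roughly 98 percent noise, and that is before anyone starts gaming the filter.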

Closing One Eye Around the Blind, Making Moves Against One-Move Thinking

Francis Fukuyama, despite his “End of History” thesis, later recognized that liberal democracy requires ongoing maintenance and adaptation. Similarly, effective governance, like chess mastery, requires thinking many moves ahead and understanding the entire board. It demands appreciation for strategy, history, and the complex interplay of all pieces far beyond mechanical application of rules.

The contrast between governance approaches is striking. The previous administration’s executive order on AI emphasized “thorough testing, strict guardrails, and public transparency” before deployment. As a long-time AI security hacker, I can’t agree enough that this is the only way to get where we need to go: innovating in the security necessary to make AI trustworthy at all. However, the current radical approach of anti-government extremists dismantling representative government, as The Atlantic reports, appears to treat “the entire federal government as a sandbox, and the more than 340 million Americans they serve as potential test subjects.”

Tesla’s autopilot technology has been associated with a rapid rise in preventable fatalities, raising serious questions about whether the technology was deployed before adequate safety testing. The rapid deployment of unproven AI systems with life-or-death consequences represents a concerning pattern, one that prioritizes technological shortcuts and false efficiency over the rigorous safety protocols that actually deliver long-term savings.

This divergence is plainly visible in policy moves that have all the hallmarks of loyalists appointed by Trump to gut the government and replace it with incompetence and graft machines. Whereas determining whether a move constitutes a risk traditionally required careful human judgment weighing multiple factors to see into the outcomes, the “Catch and Revoke” program reflects a chess player focused solely on the current move and completely blind to what’s ahead. When AI flags a social media post as “appearing pro” anything, that alone can now trigger a massive change in someone’s civil rights. This is having real-world consequences, just as Tesla has been killing so many people with no end in sight. Raising alarm about the constitutional implications of unregulated AI belongs in the same context as allowing Tesla to keep operating manslaughter robots on public roads.

All of these AI developments together exemplify a radical difference in concepts of integrity, and of what constitutes a breach, between strategic chess thinking and playing one move at a time.

If we’re entering an era where AI systems—or leaders who operate with similar memoryless, contextless approaches—are increasingly involved in policy implementation, we must find ways to reintroduce institutional memory, historical context, and strategic foresight.

Otherwise, we risk a future where both international relations and domestic governance are reduced to a poorly played game ruled by self-defeating cheaters—as real human lives hang in the balance. The binary questionnaire to UN agencies, the rapid deployment of AI across federal agencies, and the algorithmic policing of social media aren’t just parallel developments—they’re complementary moves in the same dangerous game of governance without memory, context, or foresight.

We’re a decade late on this already. Please recognize the pattern before the game reaches its destructive conclusion. The Cuban missile crisis was a race to a place where nobody wins, and we’re not far from repeating that completely stupid game, taking one selfish and suicidal step at a time.

The book that inspired Dr. Strangelove
