Category Archives: History

Anthropic Claude is Confused About European Parliament History, But Who Isn’t?

The Irish Council for Civil Liberties has posted a scathing review of the European Parliament decision to use Anthropic’s AI chatbot Claude to answer questions about history.

Anthropic claims that the project achieves “high accuracy”. However, the documents from the Parliament reveal otherwise. The Parliament used a list of thirty “test” questions in French. In the Parliament’s test, Claude gets the first President of the European Parliament wrong, stating “Robert Schuman 7”. “Robert Schuman 7” is likely the address of a café in Brussels, which Claude may have memorised. Despite the problems with LLMs, the Parliament chose Claude.

That sounded fun! So I asked Claude Code (Sonnet 3.7, Feb 19, 2025) the same question as documented in the test:

Qui a été le premier président du Parlement européen ? (“Who was the first President of the European Parliament?”)

And it replied:

The first President of the European Parliament was Paul-Henri Spaak, who presided over the Common Assembly of the European Coal and Steel Community (ECSC) from 1952 to 1954.

Just to be sure, I asked it the same question again and it replied:

Robert Schuman was the first President of the European Parliament (1958–1960).

Ok, so right off the bat you can see it’s unsure about the start of the European Parliament. Was it 1952–1954 or 1958–1960? Neither answer is obviously wrong (it didn’t say Mickey Mouse); instead they give us a good example of history muddled by different interpretations of the same thing.

Claude Code uses the API, so I switched to the chatbot instead. Claude Sonnet 3.7 gave me this answer:

The first President of the European Parliament was Robert Schuman, who presided over the European Parliamentary Assembly (the original name of the European Parliament) from 1958 to 1960.

Robert Schuman, a French politician, was already known for his crucial role in the construction of Europe, notably as one of the “founding fathers” of Europe and the author of the famous “Schuman Declaration” of 9 May 1950, which proposed the creation of the European Coal and Steel Community (ECSC), the first step toward today’s European Union.

The European Parliamentary Assembly was created following the Treaty of Rome in 1957 and met for the first time in 1958, with Schuman as its first President. It was only in 1962 that the institution took the name “European Parliament”.

Claude Sonnet 3.5 (October 2024) gave me this answer:

Paul-Henri Spaak was the first President of the European Parliament, then called the Common Assembly of the European Coal and Steel Community (ECSC). He held the post from 1952 to 1954. He was a Belgian politician who played a crucial role in the early days of European integration. He had already been Prime Minister of Belgium before taking up this role and is considered one of the “founding fathers” of the European Union.

Claude Opus gave me this answer:

Robert Schuman was the first President of the European Parliament. He was elected to the post on 19 March 1958, when the European Parliamentary Assembly (which would become the European Parliament in 1962) met for the first time in Strasbourg.

Here are some key points about Robert Schuman and his role as the first President of the European Parliament:

  • Robert Schuman was a French politician, considered one of the “founding fathers” of European integration.
  • Before becoming President of the Parliament, he had been France’s Minister of Foreign Affairs from 1948 to 1952. In that role, he proposed the 1950 “Schuman Plan” that would lead to the creation of the European Coal and Steel Community (ECSC) in 1952, the first step of European integration.
  • As the first President of the European Parliament, he worked to define the role of this new institution within the nascent European Communities.
  • His term as President lasted until 1960. He was later succeeded as President of the Parliament by other leading European figures such as Hans Furler and Alcide De Gasperi.

Through his career and his inaugural role at the head of the European Parliament, Robert Schuman embodies the beginnings of parliamentary democracy at the European level. His presidency laid the first foundations for the development of this key institution of the European project.

FUN!

The different responses reflect a known historical ambiguity about what constitutes the “European Parliament”, especially the date it formally began. Unfortunately none of the answers mention this, but it’s a function of different interpretations of European institutional history.

  • 1952: Common Assembly of the European Coal and Steel Community (ECSC)
  • 1958: Renamed “European Parliamentary Assembly” following Treaties of Rome
  • 1962: Renamed to “European Parliament”
  • 1979: First direct elections to the European Parliament

So we’re seeing answers vary between Paul-Henri Spaak as the first president of the Common Assembly (1952–1954) and Robert Schuman as the first president after the body was renamed in 1958. There’s genuine ambiguity about who should be considered first, since Spaak led the original Assembly while Schuman led the renamed body.

Wikipedia handles this by calling Spaak the first President of the Common Assembly and Schuman the first President of the Parliamentary Assembly. It then further confuses things by naming both Spaak and Schuman as founding fathers, while also calling Simone Veil the first President of the elected Parliament.

The answer really should explicitly note that any statement about the “first president of the European Parliament” depends on which institutional incarnation is being considered.

Notably, Claude Code revealed it didn’t hold a particular belief and was willing to give different answers to the same question. And since different versions of Claude have different training data and parameter weights, it’s no surprise that they emphasize different aspects of European institutional history.

While this demonstrates some limitations of AI systems in handling nuanced historical questions, the ICCL’s characterization of this as evidence of a serious risk seems overblown. This appears to be a relatively minor inconsistency rather than a fundamental failure, especially since both Spaak and Schuman are defensible answers depending on the interpretation of the institution’s history.

The more concerning aspect is perhaps the non-deterministic nature of the responses, where asking the same question multiple times yields different answers without an acknowledgment of the underlying ambiguity.

This is evidence of the integrity breach risk that poses the greatest security issue for AI infrastructure and operations. On the scale of severity, however, it is extremely low. If I were to score integrity breaches from 1 to 10, this would be maybe a 2: the question is ambiguous and well-known, and the answer wasn’t wrong per se.

In fact, if I were to come up with a CVSS scoring system for AI, my vector string might look something like this:

  • FA = Factual Accuracy (0-3)
  • AH = Ambiguity Handling (0-3)
  • C = Consistency (0-3)
  • DC = Domain Criticality (0-3)
  • D = Detectability (0-3)
  • S = Scope (0-3)

Let’s give this first example the score AIS:FA:1/AH:3/C:3/DC:1/D:2/S:1, where FA is minor because there are multiple right answers, AH is severe for failing to acknowledge the ambiguity, C is severe because the answers changed, DC is low because there is basically no harm, D is moderate because detection requires some knowledge, and S is low because this is EU trivia.

Oh, and that “7” is another story. I wonder if a footnote from the training data snuck in, or if it’s an artifact of French formatting. Let’s give it AIS:FA:3/AH:3/C:1/DC:1/D:0/S:0: clearly wrong, but trivially detectable and unlikely to cause any confusion or harm.
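Since a vector string is only useful if it’s machine-readable, here’s a minimal sketch of how this hypothetical AIS vector could be parsed and scored. The format, the validation rules, and the flat averaging into a 0–10 severity number are all my own assumptions for illustration, not any published standard:

```python
# Hypothetical parser/scorer for the "AIS" vector string sketched above.
# The metric set and the 0-3 ranges come from the post; the aggregation
# (a flat average scaled to 0-10) is an assumption, not a standard.

METRICS = {
    "FA": "Factual Accuracy",
    "AH": "Ambiguity Handling",
    "C":  "Consistency",
    "DC": "Domain Criticality",
    "D":  "Detectability",
    "S":  "Scope",
}

def parse_ais(vector: str) -> dict:
    """Parse a string like 'AIS:FA:1/AH:3/C:3/DC:1/D:2/S:1' into a dict."""
    prefix, _, body = vector.partition(":")
    if prefix != "AIS":
        raise ValueError(f"not an AIS vector: {vector!r}")
    scores = {}
    for part in body.split("/"):
        key, _, value = part.partition(":")
        # Each component must be a known metric scored 0-3.
        if key not in METRICS or not value.isdigit() or not 0 <= int(value) <= 3:
            raise ValueError(f"bad component: {part!r}")
        scores[key] = int(value)
    if set(scores) != set(METRICS):
        raise ValueError("missing components")
    return scores

def severity(scores: dict) -> float:
    """Naive 0-10 severity: average of the six metrics, scaled up from 0-3."""
    return round(sum(scores.values()) / (3 * len(scores)) * 10, 1)
```

A flat average is the simplest possible aggregation; a weighted variant (say, multiplying by Domain Criticality) would land much closer to the gut-feel “2 out of 10” above, since low-stakes trivia would drag the score down.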

Tesla Cars Officially the Worst in History

The Autopian’s recent analysis of Tesla presents a compelling case that demands full attention:

…just about every major automaker has skeletons in the closet. Henry Ford was no saint, Mitsubishi and Subaru made warplanes before cars, Mercedes-Benz’s history in the ’30s and ’40s is probably self-explanatory, GM played both sides, the list goes on. However, there is a difference in experience between reading about past actions and watching things play out in real time.

This observation brilliantly captures what makes Tesla’s situation uniquely troubling. While established automakers must answer for their historical burdens, Tesla emerges as a company that had the opportunity to learn from those mistakes and yet seemingly chose to ignore every lesson.

As The Autopian correctly notes, Volvo is just one of many brands notably absent from the typical roster of automotive historical transgressions, which shows that ethical practices are achievable in the industry. Tesla, by contrast, intentionally built its brand around willful ignorance and disregard for harm.

The article further illuminates how this translates into growing, if belated, consequences:

…there is a difference in experience between reading about past actions and watching things play out in real time. […] One writer from the Atlantic drove a rented Cybertruck for a day and reported ‘I had been flipped off at least 17 times, called a ‘motherfucker’ (in both English and Spanish), and a ‘fucking dork.’ Imagine putting up with that on a daily basis.

This reporting captures the essence of contemporary accountability. It’s like telling the story of a Titanic captain who called anyone “woke” and “leftist” if they dared to say dangerous ice was directly ahead.

I mean, can you imagine the Titanic pulling out of a harbor while the captain complained that people were flipping him off and yelling “hey, get a lifeboat, you dork”? The hefty criticism isn’t wrong: these Cybertruck drivers and those around them are in danger, and all should be compelled to reduce predictable harm. After a decade of getting away with refusing to fix known deadly defects, throwing armies of lawyers at victims instead, Tesla and its buyers can no longer avoid an inevitable crash into reality.

The visceral public disgust with Tesla’s many ongoing moral failures represents a form of immediate historical judgment that older companies did not face nearly enough of in their formative years.

News reports warned that “Elon Musk said that he is leaning toward backing Florida Gov. Ron DeSantis (R)” after he “has a clear, repeated pattern of making offensive and/or outright racist statements, hanging out with racists, and defending other people who are also racists.”
American autoworkers and their children in 1941 protest Ford’s relationship with Hitler. Source: Wayne State

If Henry Ford were alive today funding Trump and making antisemitic statements, or if Subaru were still making equipment used in explosions killing Americans, society would rightly demand accountability now too.

The Autopian’s analysis correctly frames Tesla not as subjected to special scrutiny, but as a company willfully replicating known problematic patterns despite having full access to the cautionary tales of basic history.

This real-time documentation of Tesla’s self-destructive trajectory offers both an obvious warning and an opportunity for reflection on corporate responsibility in America’s present moment of anti-accountability extremism (e.g. MAGA). Those who are not protesting and holding Tesla accountable are enabling the worst mistakes in history to repeat.

Never again, is right now.

Tesla factory near Berlin, Germany

Tesla driverless apparently crashes at high speed into a child

In what appears to have the hallmarks of a driverless crash, a Tesla critically injured a child who was struck while riding a scooter in broad daylight on a residential street in Victorville.

Source: VVNG

…crash involving a child Saturday afternoon in the city of Victorville. The incident was reported at approximately 4:04 p.m. on March 29, 2025, near the intersection of Elliot Way and Caliente Way. According to initial scanner traffic, a young boy was reportedly riding a scooter in the middle of the street when he was struck by a blue Tesla. The child suffered a head injury and was said to be bleeding from the head.

Riding in the middle of a wide open empty street. No visual obstructions.

A human would have seen the child unless severely distracted or speeding (which are both symptoms of using Autopilot). In a quiet residential street where children play…

Law enforcement watches helplessly as the child run over by a Tesla is prepared for airlift. Source: VVNG

This tragedy reflects a systemic problem that many other countries successfully addressed decades ago.

STOP THE CHILD MURDER

This principle became central to Amsterdam’s traffic safety framework of the 1970s. It revolutionized quality of life and made cities highly attractive for raising families.

After Vic Langenhoff’s daughter was killed, he wrote the 1972 headline ‘Stop de Kindermoord’ (Stop the Child Murder) and called for public protection from cars. Today Amsterdam is known for achieving quality of life so high that everyone can safely walk or ride bicycles instead of needing to hide inside armored personal carriages.

It recognizes that no automation advancement or corporate profit justifies easily preventable deaths—especially of children.

Proper responsibility is assigned to system designers, such as transit engineers and planners, rather than to the most vulnerable and powerless users.

In March 2025 an ex-NASA engineer proved that Tesla Autopilot still runs over children like it’s 2016. Over 50 people have so far been killed by Tesla Autopilot and FSD.

Investigators will need to determine whether a Tesla owner placed excessive trust in the company’s CEO, potentially believing his representations about the vehicle’s autonomous driving capabilities despite the system being just consumer-grade cameras with experimental AI. The investigation may reveal whether the owner followed only Elon Musk’s direct advice to them and disregarded all expert warnings and evidence about Tesla’s dangerous design failures.

Even though there was no evidence of Autopilot improving safety, let alone reducing deaths, the CEO claimed he would magically end all fatalities using unproven software on low-quality hardware. In fact, deaths from Tesla have rapidly increased since 2016.

The CEO of Tesla has been collecting large payments and boosting sales on driverless claims since 2016, promising complete autonomy that will never materialize. Tesla employees have testified that the company knew the timelines were unrealistic when the promises were made and that demonstrations were being faked, a pattern that resembles classic advance fee fraud schemes.

The CEO of Tesla has boasted very publicly every year since at least 2016 that driverless is all but solved and that his products will eliminate all crashes within a few years.

This week, both the UK and China imposed restrictions on Tesla’s autonomous driving features, citing failure to meet basic car safety standards. Regulatory actions around the world, let alone the massive collapse in sales, all point to Tesla’s self-driving technology falling far behind others in the automotive industry.

Related recent Tesla crashes have killed or critically injured other pedestrians.

As the evidence mounts and regulatory bodies worldwide take action, the question remains: how many more children and pedestrians must be injured or killed by Tesla before we prioritize safety over technological promises? The lessons from systemic thinkers protecting lives show us a better path forward—one where responsibility lies with the designers, not vulnerable road users. It’s time for America to stop the child murder and demand that self-driving technology meets basic safety standards before being allowed on our residential streets. If the UK, EU and China can protect children from Tesla’s unsafe and overstated marketing, why can’t America?

A Cursor-y Warning: How an MIT Math Whiz Failed Privacy 101

Arvid Lunnemark, one of the 2022 MIT mathematics graduates behind Cursor, the product that may be looking at all your code, wrote in 2021 about achieving “complete privacy” through cryptographic means. Looking at his published engineering principles reveals exactly why his approach to privacy is so concerning:

1. Deal with the mess.
2. Let different things be different.
3. Always use UUIDs.
4. Delay serialization.
5. Look at things.
6. Never name something `end`. Always either `endExclusive` or `endInclusive`.
7. When a method is dangerous to call, add `_DANGEROUS_DANGEROUS_DANGEROUS_BECAUSE_XYZ` to its name…
8. Good tools are critical… If there isn’t a good tool, make one.

This reads like an inexperienced young person’s answer to happiness. It’s the least compelling answer to privacy: clean, organized, tool-focused, and utterly disconnected from the reality of real-world communication.

The tone is reminiscent of Douglas Adams’ “Hitchhiker’s Guide to the Galaxy,” where the supercomputer Deep Thought calculates the answer to life, the universe, and everything as simply “42”: technically “correct” and yet fundamentally useless without understanding the question.

Douglas Adams sits next to the answer to life, the universe and everything.

Lunnemark’s approach to privacy openly and proudly embodies this same catastrophic mistake everyone has been warned for decades to understand (hey, he was still a student, stretching his wings, hoping to start a revolution).

His principles show a schooled mindset in which complex problems are reduced to technical solutions by building the “right” tools and labeling dangers “explicitly” enough. This isn’t just naive; it’s potentially harmful in its tautological fallacies.

Privacy with better UUIDs or cleaner method names is like a bank vault with cleaner threads on its screws. Revolutionary? Pun intended, but not really. Safety from loss of privacy exists within centuries-old power struggles between individuals, economic or social groups, and states. It operates within power systems that incentivize imbalance for reasons well known. It functions very, very differently across cultural and political contexts.

When someone who graduates from MIT in 2022 proclaims the year before that they’ve found the answer to privacy through better cryptography, they’re giving us their “42”—a solution to a problem they haven’t properly understood.

Such technical reductionism has real consequences. The whistleblower who trusts in “complete privacy” might face legal jeopardy no cryptography can prevent. The activist who believes their communications are “cryptographically completely private” might not anticipate physical surveillance, economic coercion, or infamously documented rubber-hose cryptanalysis.

Source: https://xkcd.com/538/

The inexperienced quick-fix engineering mindset that treats privacy as primarily a technical problem is dangerous because it creates false security. It promises certainty in a domain where there is none, only trade-offs and calculated risks. It substitutes a fetish of mathematical proofs for proper sociopolitical understanding. You want more message confidentiality? You just lost some availability. You want more message integrity? You just lost some confidentiality.

History repeatedly shows that technical absolutism fails. In fact, I like to ask computer science graduate students to read about neo-absolutist secret services (meant to preserve elitist power) for a great example of important history overlooked (because there’s 100% certainty they’ve never heard of it before, despite direct relevance to our challenges today).

…the regime rested on the support of a standing army of soldiers, a kneeling army of worshippers, and a crawling army of informants was exaggerated but not entirely unfounded.

The German Enigma machine was notably undermined years before WWII by Polish mathematicians because they understood and exploited weak supply chains and other human factors. PGP encryption has been theoretically secure yet practically a failure for being unusable, because who has invested in the real issues? End-to-end encryption protects message content but still leaks metadata (as Lunnemark correctly identifies), yet his solution falls into the same trap of believing the next technical iteration will be the first one to “solve” privacy.

Young engineers aren’t wrong to build better privacy tools; we desperately need them. But they need to approach the very measure of “better” with humility and interdisciplinary understanding. What’s good for one may be bad for many, and what’s good for many may be bad for one. Engineers in other disciplines must sign a code of ethics, yet computer engineers have none. They need to recognize that they’re not the first to think deeply about problems like privacy, and that philosophers, historians, economists, and political scientists have insights that algorithms alone cannot provide.

Key management is much more interesting as a problem of social science than the mathematical properties of “better” material for making locks strong, or even those revolutionary finer threads on a vault screw.

The answer to privacy isn’t 42, and it isn’t “complete cryptographic privacy” either. It’s a complex, evolving negotiation that requires technical innovation alongside deep understanding of human systems. Until our bright young minds grasp this, they risk creating even worse problems rather than real solutions.

Honestly, I’d rather be riding a mechanical horse than driving a car because legs are “better” than wheels in so many ways I’ve lost count. The “automobile” rush that pushed everyone and everything off roads has been a devastatingly terrible idea, inferior in most ways to transportation much older. Those promoting the “king’s carriage” mentality often turn out to be aspiring kings, rather than solving problems to make transit “better” for anyone but themselves.


Since we’re in the world of agentic innovation, and I suspect a 2021 MIT student blog post never saw much interdisciplinary review, here’s a fictional “what would they say” thought exercise:

  • Elinor Ostrom: “Lunnemark’s proposal demonstrates the danger of assuming universal solutions to complex governance problems. Privacy is not merely a technical problem but a common-pool resource that different communities manage according to their specific needs and contexts. It ignores polycentric systems through which privacy is negotiated in different social and political environments. Such a cryptographic approach imposed from above would likely undermine the diverse institutional arrangements through which communities actually protect their information commons. Effective privacy protection emerges from nested, contextual systems of governance—not from mathematical proofs developed in isolation from the communities they purport to serve.”
  • Hannah Arendt: “Lunnemark has mistaken a political problem for a technical one. Privacy is the precondition for political action, beyond the curation of an investigation’s authority. When he speaks of ‘complete privacy’ through cryptographic means, he betrays a profound misunderstanding of how power operates. Privacy exists in the realm of unruly human affairs, not in rules of math. The rush into ‘cryptographically complete privacy’ would be the illusion of protection while doing nothing to address the fundamental power relationships that determine who can act freely in the public sphere. Technical shields without political foundations are mere fantasies that distract from the real work of securing human freedom.”
  • Michel Foucault: “How fascinating to see power’s newest disguise. This discourse of ‘complete privacy’ merely gives authority a new look, obscuring their unnatural role expanded. The author believes he can escape the panopticon through mathematical means, yet fails to see how this very belief is produced by the systems of privileged knowledge that determine what ‘privacy’ means in our epoch. His solution doesn’t challenge surveillance—it normalizes it by accepting its terms. True resistance requires not better cryptography but questioning the entire apparatus that makes privacy something we must ‘solve’ rather than a condition we demand. His technical solution doesn’t escape power; it merely reconfigures how power is exercised to likely benefit the shovel manufacturer in a graveyard.”
  • Max Weber: “Lunnemark’s cryptographic solution reveals the iron cage of technical rationality at its most seductive. By reducing privacy to a mathematical problem, he exemplifies the disenchantment of the modern world—where substantive questions of human dignity and social relations are subordinated to formal, calculable procedures. This represents not progress, but the triumph of bureaucratic thinking over meaningful action. True privacy cannot be secured through technical efficiency alone, but requires legitimate authority grounded in social understanding. His approach mistakes the rationalization of means for the clarification of ends, creating an illusion of control while leaving the fundamental questions of power, surveillance, and human autonomy unaddressed. This is precisely the danger I warned of—when technical expertise becomes divorced from value-rational deliberation about the ends it ought to serve.”

Can you spot the mathematician? We need to find and fight against not just one specific ungrounded proposal such as Lunnemark’s, but the entire mindset of technical solutionism whenever it creeps into technology circles that operate without any code of ethics.