
Firefox Translation Errors: “Snake people in Berlin”

A photograph of defeated Nazis standing in line, begging for handouts, gets a curious translation from the new Firefox client-side engine.

Here’s the original, an image with a caption in German on the Berlin.de “official capital portal”:

Source: Berlin.de

Here’s the Firefox browser translation to English:

“SNAKE PEOPLE”

That can’t be right. The guy in the middle is still sporting the Hitler mustache (like he didn’t get the message), and Nazis were characterized as snakes, but could that alone be enough to trigger such a caption by “learning” technology?

Perhaps confusing matters further, the East German writer Stefan Heym likened West Germany to a snake that would get indigestion from swallowing the hedgehog (the GDR):

Nach Ansicht des Schriftstellers Stefan Heym wird nach dem Wahlergebnis von der DDR „nichts übrigbleiben als eine Fußnote in der Weltgeschichte”. Heym weiter: „Die Schlange verschluckt den Igel, die Schlange wird Verdauungsschwierigkeiten haben.”

(In the view of the writer Stefan Heym, after the election result “nothing will remain of the GDR but a footnote in world history.” Heym continued: “The snake swallows the hedgehog; the snake will have digestive difficulties.”)

Anyway, let’s break it down: “Schlange stehende Menschen”.

Schlange: literally means snake. Allegedly it comes from the Old High German word “slango”, similar to Yiddish שלאַנג (shlang), which in slang can also mean penis. On that note, I’m a little disappointed Firefox didn’t translate it to “bunch of dicks”.

Stehende: literally means standing, a present participle of the verb “stehen” (to stand) used as an adjective.

Menschen: literally means people, the plural of “Mensch” (person).

Somehow the literal German words for snake, standing, and people were turned by the Firefox translation engine into SNAKE PEOPLE, rather than the idiom’s actual meaning: people standing in line.
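If you want to poke at this failure mode yourself, here is a minimal sketch using an open German-to-English model. One loud assumption: the Helsinki-NLP/opus-mt-de-en checkpoint is a public stand-in, not the Bergamot/Marian models Firefox actually ships, so its output may well differ from the caption above.

```python
# Minimal sketch: probe how a small open German-to-English model handles
# the idiom "Schlange stehen" (to stand in line) versus a literal snake.
# Assumes the Hugging Face transformers library and the public
# Helsinki-NLP/opus-mt-de-en checkpoint; Firefox's client-side engine
# uses different models, so this is illustrative, not a reproduction.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

phrases = [
    "Schlange stehende Menschen",     # the caption in question
    "Die Menschen stehen Schlange.",  # the same idiom in a full sentence
    "Eine Schlange liegt im Gras.",   # a literal snake, for contrast
]

for phrase in phrases:
    translation = translator(phrase)[0]["translation_text"]
    print(f"{phrase!r} -> {translation!r}")
```

Short, context-free captions give a translation model the least to work with, which is one plausible reason a fragment like this one gets read literally rather than idiomatically.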

Now for the really fun part. ChatGPT says that everyone knows the phrase “Schlange stehende” really means standing in line.

You’d think that’s a simple and set answer.

Except, do you think ChatGPT really knows what it’s talking about… ever? See how confident it sounds that everyone knows “Schlange stehende” is a common German phrase? So confident that immediately afterwards it contradicts itself, as if trying to win votes by saying anything just to get you to agree. Everything it says is a hallucination, always, and usually politically motivated.

ChatGPT is literally arguing that “Schlange stehende” is not a valid German phrase, and then it tries to rationalize that snakes are unable to stand. Both claims are laughably useless, proof that AI continues to fail at basic life tasks, given that it is a common, well-known phrase in German and that snakes can of course be “standing around” figuratively.
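For anyone who wants to re-run that exchange instead of taking a screenshot’s word for it, here is a minimal sketch against the OpenAI chat API. To be clear about assumptions: the model name and the exact prompts are illustrative stand-ins, not what produced the responses above, and sampling means your answers will vary from run to run.

```python
# Minimal sketch: ask a chat model the same topic two ways and compare the
# answers for self-contradiction. Assumes the official openai Python package
# (v1 client style) and an OPENAI_API_KEY in the environment; the model name
# "gpt-4o-mini" is an example, not necessarily what ChatGPT used above.
from openai import OpenAI

client = OpenAI()

questions = [
    "Is 'Schlange stehende Menschen' valid German? Answer briefly.",
    "What does the German idiom 'Schlange stehen' mean? Answer briefly.",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    print(question)
    print("->", response.choices[0].message.content)
```

Asking closely related questions back to back is a cheap way to surface exactly the kind of confident self-contradiction described above.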

Firefox looked like it made a silly, overly literal mistake. But ChatGPT opens up the possibility that machine learning has a much deeper translation problem.

Consider how ChatGPT will probably never improve itself because fundamentally, like Heym’s image of the snake swallowing the hedgehog, the OpenAI ingestion machine stands on a huge mine of complex, shifting political and social sands.

Snake people? SNAKE PEOPLE? In 1945 Berlin? That’s saying something well-known but rarely said out loud. To put it plainly, Aesop’s fables might even be in the corpus being used to translate German.

Nazis are depicted as snakes because they are cruel, sinister authoritarians known for deceitfulness.

In that sense, the historian in me can’t help wondering about a dehumanizing 1945 SNAKE PEOPLE IN BERLIN caption that is… not as silly or innocent in translation as it would seem at first glance.

“We Will Eradicate the Spies and Saboteurs, the Trotskyist-Bukharinist Agents of Fascism.” Sergei Igumnov, 1937

Retail Giant Selling Inaccurate Black History Book Exposed by US History Teacher

With everyone working hard to figure out the danger of deep fakes, they really should just hire more historians.

Target’s under fire for selling Black History Month material in their stores with factually incorrect info — namely, grossly mislabeling historic Black Americans.

The ill-timed snafu was pointed out by TikToker @Issatete — who posted a now-viral video of what she says she found in a Target near her. In the clip, she shows off a magnet-style learning activity book called “Civil Rights.”

Maybe Black History Month could also become U.S. information integrity breach reporting month, with rewards given to those who disclose.

How to Speed Up Military Drone Innovation in America

German news captures 2022 sentiment that Russia is growing weaker by the drone

A rather superficial analysis featured on War on the Rocks reveals a surprising lack of depth in its discussion of artificial intelligence (AI). In essence, the crux of the argument suggests that by lowering expectations, particularly in terms of reliability, the concept of “innovation” is reduced to nothing more than pushing a colossal and conveniently uncomplicated “plug and pray” button.

The authors’ apparent reductionist perspective not only fails to grasp the intricacies of AI’s potential in the realm of warfare but also overlooks the nuanced challenges that seasoned military analysts, with decades of combat experience, understand are integral to the successful integration of advanced technologies on the battlefield.

America’s steadfast commitment to safety and security assumes that the United States has the three to five years to build said infrastructure and test and redesign AI-enabled systems. Should the need for these systems arise sooner, which seems increasingly likely, the strategy will need to be adjusted.

When considering America’s commitment to safety and security, a closer examination reveals that a steadfast commitment inherently implies less reliance on assumptions. The authors, however, leave a significant void in their argument by not adequately clarifying their position on this. The closest semblance of an alternative is their proposition of a vague aspirational path labeled AI “assurance,” positioned between the extremes of measured caution and imprudent haste.

…urgently channel leadership, resources, infrastructure, and personnel toward assuring these technologies.

A realist imperative, however, underscores the dynamic nature of the geopolitical landscape, necessitating a proactive stance rather than a reactive one. Three to five years ahead is a tangible goal, as opposed to shrinking release cycles into the imprudent “burn toast, scrape faster” mentality. The strategic imperative lies not merely in constructing a sophisticated AI apparatus but also in ensuring resilience and adaptability to the predictable exigencies of future conflict scenarios.

Here are a few instances of downrange events that unequivocally warrant the disqualification of AI innovations, a consideration surprisingly absent in the referenced article:

Source: My presentation on hunting robots, 2023 RSA SF Conference

This War on the Rocks article by a “native Russian speaker,” however, shamelessly bestows excessive praise on Russia for accelerating toward an ill-conceived “automated kill chain” characterized by total disregard for baseline assurances. In doing so, the authors fail to acknowledge the pivotal point of battlefield drone engineering: Ukraine left oppressive Russian corruption and hollow patronage behind and strongly asserted measured morality and quality control, which has been the true catalyst for its rapid and successful drone innovations (leaving the Russians always in clueless catch-up mode).

Russia’s reckless pursuit and indiscriminate deployment of AI, as highlighted in the War on the Rocks article, contribute to the mounting evidence of Russian tanks and troops being grossly outmatched by adversaries who prioritize fundamental training and employ sophisticated countermeasures.

An overwhelming desire to switch into the “at any cost” haste of catch-up mode, lacking any morality, is of little benefit when it brings about crushing technical debt and self-destructive consequences.

Remarkably, the authors neglected to explain their omission of Ukrainian strides in “small, relatively inexpensive consumer and custom-built drones” as an integral aspect of an American military strategy of effective targeting. Equally puzzling is their apparent belief that innovation ceases when others replicate it.

Taking a broader perspective, the American military ethos, characterized by technology that augments skilled professionals in tanks, has demonstrably outshone Russia’s reliance on over-automation guided by disposable conscripts stupidly killing themselves even faster than their enemy can. Despite Russia’s boastful rhetoric, its inability to distinguish between effective and ineffective strategies echoes historical patterns familiar to statisticians of World War II examining the Nazi lack of technological prowess.

AI, far from being an exception to historical trends, appears to be a recurrence of unfavorable chapters. Reflect on the crossbow, the longbow, the repeating rifle, or even Churchill’s “water” tanks (e.g. how America ended up mass-producing Britain’s innovations)… and the trajectory becomes evident. Advancements in genuine measures of safety and security, weapon assurance as a practical matter, have defined battlefields for centuries.

Abraham Lincoln is famously credited with urging the prudent use of time to sharpen an axe before felling a tree, a maxim applicable to any technology. The historical narrative strongly indicates that AI, as a technological frontier, will only underscore the enduring wisdom of the President who delivered unconditional victory in America’s Civil War.

You Can Get Answers Only to Questions You Think to Ask

This phrase seems like the buried lede in an economic analysis of market discontent.

No single poll is definitive, and you can get answers only to questions you think to ask.

It’s a philosophical point, one that for me evokes the wonderful empiricist insights of the late 1700s.

The path towards knowing the right questions to ask can be derived from one of the most famous philosophers in history, Mary Wollstonecraft. A prominent figure in the early suffrage and abolition movements (rights for women and non-whites), she wrote “A Vindication of the Rights of Woman” in 1792, which argued for social and political equality (power) through the process of learning (education).

In the modern context of AI hacking, Wollstonecraft’s emphasis on the societal responsibility of knowledge is very pertinent. She famously stated “I do not wish [women] to have power over men, but over themselves,” which should be required training for every hacker. Her philosophy remains powerfully useful, giving us the idea that “learning” via questions is a disruptive power dynamic in society, one that must be guided by ethical considerations.

My favorite way of thinking about it (as a longtime physical security auditor, not to mention encryption key breaker) is that no single lock is definitive, but every prior lock is illustrative. Thus you get past locked doors by thinking about enough of the ways to get in.