
Die große Trümmerfrau spricht

A poem from the book Gleisdreieck (1960) by Günter Grass speaks to the peculiar state of mythical German women tasked with clearing the rubble of WWII.

Gnade Gnade.
Die große Trümmerfrau…

Amen Amen.
Hingestreut liegt Berlin.
Staub fliegt auf,
dann wieder Flaute.
Die große
Trümmerfrau wird heiliggesprochen

Those last lines translate roughly as “The great rubble woman is canonized”.

The “canonized” angle of the Trümmerfrau is interesting because they actually were a tiny and insignificant group of reluctant volunteers.

The historian concedes that, of course, the builders needed help – after all, about 400 million cubic meters of rubble and ruins were waiting to be cleared across the nation. “But women played a minor role in clearing German cities from the rubble,” Treber says. Berlin mobilized about 60,000 women to clear the war ruins, but even that amounted to no more than 5 percent of the female population – it wasn’t a mass phenomenon. In the British Sector, Treber says, only 0.3 percent of the women joined in the hard work. Yet it wasn’t just the women who were reserved when it came to clearing the war debris; men weren’t crazy about the task, either. In the eyes of the Germans, it was anything but honorable for people to show their “willingness to rebuild.” In fact, most Germans regarded clearing rubble as punishment – and for a reason. During the war, the Nazis made soldiers, the Hitler Youth, forced laborers, prisoners of war and concentration camp prisoners clear the bombed cities after Allied air raids.

This checks out when you read American military histories of the occupation after the war. In fact, while a number like 60,000 sounds large, first-person accounts explain what those workers actually accomplished against the scale of an entire country reduced to rubble.

“We had 20,000 (people) per shift and we worked 24 hours a day with lights, generator sets — so there were 60,000 people,” Delbridge said. “We had more women than men that did all of the earth moving… and they moved the earth by hand.” In all, records from the U.S. Army Corps of Engineers, Office of History estimate that more than 9.8 million work hours went into the [Tempelhof Airstrip] effort between military personnel and local Germans. Local Germans – mostly women according to Delbridge – accounted for the vast majority of that figure (more than 9.6 million work hours).

Thus it was 60,000 people, mostly women, who cleared and built one airstrip. Undoubtedly an important project, yet that was just one airstrip.

For another simple number check: during WWII the Nazis deported 75,000 people to Leipzig as forced laborers, the “Ostarbeiter”, including for punishing rubble removal.

Soviet prisoners of war removing rubble in the centre of Leipzig (Leipzig city archive)

The mostly forgotten “Ostarbeiter”, despite numbering far more than the Trümmerfrauen, were in addition to the large slave labor supply out of the Buchenwald concentration camp system.

Here’s another typical image, courtesy of Hamburg, almost completely erased by the “big” Trümmerfrau story.

Concentration camp prisoners, many from satellite camps of Neuengamme, remove corpses of German civilians after Allied bombings of Hamburg. Germany, August 1943. Source: Holocaust Encyclopedia

Thus the story of a subset of 60,000 people clearing all of Berlin does NOT add up. It is dwarfed by the bigger picture of who removed rubble, and when. One important airstrip indeed could be credited to tens of thousands of Trümmerfrauen by the U.S. military, but what does that really represent about Berlin’s reconstruction?

…in a voluntary recruitment drive in Duisburg in the West German industrial Ruhr-area in December 1945, 10,550 men volunteered — and 50 women. Such evidence suggests that when they were not compelled to do so, German women did not volunteer in great numbers. […] The divided city of Berlin was a special case. Here, large numbers of women did clear rubble — about 26,000 women in total, and the term Trümmerfrauen originated in West Berlin. This large number was due to the fact that women far outnumbered men in Berlin — in the age group 20–39, there were 250,000 men and 500,000 women in Berlin in 1947.

Perhaps the thing that rings most hollow is how the German narrative tried to frame Nazi women after WWII as suffering hard labor, at the very same time that concentration camps were being fully investigated.

As a young woman who had grown up almost exclusively under the Third Reich, Frau Naß admits the end of the war threw all her beliefs into question: “We were totally disillusioned, because as girls we had gone through the Hitler Youth,” she says. “You have to imagine how you would react if the whole system you had been brought up in simply didn’t exist anymore. People just couldn’t grasp it.”

The lack of slaves?

Hard work really hit the Nazi girls hard, I guess, when they realized they couldn’t expect Hitler’s promises of slavery to work for them anymore. Their dream of easy living through slavery wasn’t easy to let go of apparently, and some say we should appreciate them more for it.

The suffering of these women isn’t even appreciated.

Here’s a good description of what is meant by “the whole system you had been brought up in simply didn’t exist anymore”:

Not only were the women not volunteering to help in the rebuilding, the men weren’t signing up either. It was not seen as honorable to help rebuild. In fact, it was considered punishment. The reason for that lies in the fact that the Nazi party forced soldiers, Hitler Youth, prisoners of war and concentration camp prisoners to clean up the rubble in Berlin during the war. After the war, the authorities began using prisoners of war and former members of the Nazi party. Only when progress was insufficient using those forced laborers did the country turn to the general population for help. In the West, the help was voluntary…. Berlin encouraged participation by making the second-highest category of food ration cards available to the Trümmerfrauen. They showed images of smiling women cheerfully lugging stones and bricks. The image was repeated so many times, it is ingrained in the German collective consciousness.

A small group of reluctant volunteer women, only showing up for highly valuable ration cards, seems to be what became an ingrained German propaganda image of willing hard workers. Was it meant to be a subtle nod back to Arbeit macht frei?

Some have started to study whether such propaganda was a calculated effort by Nazis after they surrendered to coldly erase the memory of those who had suffered actual hard labor under their tyranny. A strange irony is emerging. Clearing rubble was punishment to be avoided by German women, until “canonization” for hard work was on the table and then suddenly it was appropriated by them as a symbol of pride.

The focus of this research project is to investigate the Austrian “Trümmerfrauen”-myth as the idea that the removal of debris after World War II in Vienna was mainly done by voluntary female workers. To this end, previously unprocessed holdings of the Wiener Stadt und Landesarchiv will be systematically recorded and analyzed for the first time. From these holdings it becomes clear that the work in Vienna was primarily done by former National Socialists who were compelled by law to work. …this expiatory work by former NSDAP members could give rise to the Austrian “Trümmerfrauen”-myth decades later.

A Trümmerfrau at work. Source: Gleisdreieck by Günter Grass

Why Open-Source AI is Faster, Safer and More Intelligent than Google or OpenAI

A “moat” historically meant a physical method to reduce threats to the group intended to fit inside it. Take for example the large Buhen fortress on the banks of the Nile. Built by Pharaoh Senwosret III around 1860 BCE, it boasted a high-tech ten-meter-high wall next to a three-meter-deep moat to protect his markets against Nubians who were brave enough to fight against occupation and exploitation.

Hieroglyphics roughly translated: “Just so you know, past this point sits the guy who killed your men, enslaved your women and children, burnt your crops and poisoned your wells. Still coming?”

Egyptian Boundary Stele of Senwosret III, ca. 1878-1840 B.C., Middle Kingdom. Quartzite; H. 160 cm; W. 96 cm. On loan to Metropolitan Museum of Art, New York (MK.005). http://www.metmuseum.org/Collections/search-the-collections/591230

Complicated, I suppose, since being safe inside such a moat meant protection against threats, yet being outside was defined as being a threat.

Go inside and lose freedom, go outside and lose even more? Sounds like Facebook’s profit model can be traced back to stone tablets.

Anyway, in true Silicon Valley fashion of ignoring complex human science, technology companies have been expecting to survive an inherent inability to scale by relying on building primitive “moats” to prevent groups inside from escaping to more freedom.

Basically moats used to be defined as physically protecting markets from raids, and lately have been redefined as protecting online raiders from markets. “Digital moats” are framed for investors as a means to undermine market safety — profit from users enticed inside who then are denied any real option to exit outside.

Unregulated, highly centralized proprietary technology brands have styled themselves as a rise of unrepresentative digital Pharaohs, shamelessly attempting new forms of indentured servitude despite everything in history saying BAD IDEA, VERY BAD.

Now for some breaking news:

Google has been exposed by an alleged internal panic memo about the profitability of future servitude, admitting “We Have No Moat, And Neither Does OpenAI.”

While we’ve been squabbling, a third faction has been quietly eating our lunch. I’m talking, of course, about open source. Plainly put, they are lapping us. Things we consider “major open problems” are solved and in people’s hands today. […] Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months.

One week! Stunning pace of improvement. https://lmsys.org/blog/2023-03-30-vicuna/

It’s absolutely clear this worry and fret from Google insiders comes down to several key issues. The following paragraph in particular caught my attention since it feels like I’ve been harping about this for at least a decade already:

Data quality scales better than data size
Many of these projects are saving time by training on small, highly curated datasets. This suggests there is some flexibility in data scaling laws. The existence of such datasets follows from the line of thinking in Data Doesn’t Do What You Think, and they are rapidly becoming the standard way to do training outside Google.

There has to be common sense about this. Anyone who thinks about thinking (let alone writing code) knows a minor change is more sustainable at scale than a complete restart. The final analysis is that learning improvements grow bigger, faster and better through fine-tuning and stacking on low-cost consumer machines, instead of completely rebuilding on giant industrial engines after every change.

…the model can be cheaply kept up to date, without ever having to pay the cost of a full run.
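To make that concrete, here is a minimal sketch of the kind of cheap, incremental update the memo is pointing at: parameter-efficient fine-tuning (LoRA) of a small open model on a curated dataset, using the Hugging Face transformers and peft libraries. The model name, target modules and hyperparameters below are illustrative assumptions on my part, not details from the memo.

# Minimal sketch (assumptions, not the memo's recipe): attach low-rank LoRA
# adapters to a small open model so only a tiny fraction of weights train,
# which keeps each update cheap enough for a single consumer GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "openlm-research/open_llama_3b"  # assumed small open base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Only the injected adapter matrices become trainable; the frozen base model
# never needs a full, expensive retraining run to absorb new curated data.
lora = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling applied to the adapter output
    target_modules=["q_proj", "v_proj"],  # attention projections (LLaMA-style naming)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all parameters

Train those adapters on a small, highly curated dataset and the model stays current without ever paying for a full run, which is exactly the economics the memo frets about.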

You can scale a market of ideas better through a system designed for distributed, linked knowledge with safety mechanisms, rather than planning to build a new castle wall every time a stall is changed or a new one is opened.

Building centralized “data lakes” was a hot profit ticket in 2012 that blew up spectacularly just a few years later. I don’t think people realized social science theory like the “fog of war” had told them not to do it, but they definitely should have walked away from “largest” thinking right then.

Instead?

OpenAI was born in 2015 in the sunset phase of a wrong model mindset. Fun fact: I was once approached and asked to be CISO for OpenAI. Guess why I immediately refused and instead went to work on massively distributed, high-integrity models of data for AI (e.g. W3C Solid)?

…maintaining some of the largest models on the planet actually puts us at a disadvantage.

Yup. Basically confidentiality failures that California breach law SB1386 hinted at way back in 2003, let alone more recent attempts to stop integrity failures.

Tech giants have vowed many times to combat propaganda around elections, fake news about the COVID-19 vaccines, pornography and child exploitation, and hateful messaging targeting ethnic groups. But they have been unsuccessful, research and news events show.

Bad Pharaohs.

Can’t trust them, as the philosopher David Hume sagely warned in the 1700s.

To me the Google memo reads as if pulled out of a dusty folder: an old IBM fret that open communities running on Sun Microsystems (get it? a MICRO system) using wide-area networks to keep knowledge cheaply up to date… will be a problem for mainframe profitability that depends on monopoly-like exit barriers.

Exiting times, in other words, to be working with open source and standards to set people free. Oops, meant to say exciting times.

Same as it’s ever been.

There is often an assumption that operations should be large and centralized in order to be scalable, even though such thinking is provably backwards.

I suspect many try to justify such centralization due to cognitive bias, not to mention hedging benefits away from a community and into just a small number of hands.

People soothe fears through promotion of competition-driven reductions; simple “quick win” models (primarily helping themselves) are hitched to a stated need for defense, without transparency. They don’t expend effort on wiser, longer-term yet sustainable efforts toward more interoperable, diverse and complex models that could account for wider benefits.

The latter models actually scale, while the former models give an impression of scale until they can’t.

What the former models do when struggling to scale is something perhaps right out of ancient history. Like what happens when a market outgrows the slowly-built stone walls of a “protective” monarchist’s control.

Pharaohs are history for a reason.

The AI Trust Problem Isn’t Fakes. The AI Trust Problem Is Fakes.

See what I did there? It worries me that too many people are forgetting almost nobody really has been able to tell what is true since… forever. I gave a great example of this in 2020: Abraham Lincoln.

A print of abolitionist U.S. President Abraham Lincoln was in fact a composite, a fake. Thomas Hicks had placed Lincoln’s unmistakable head on the distinguishable body of Andrew Jackson’s rabidly pro-slavery Vice President John Calhoun. A very intentionally political act.

The fakery went quietly along until Stefan Lorant, art director for London Picture Post magazine, noticed a very obvious key to unlock Hicks’ puzzle — Lincoln’s mole was on the wrong side of his face.

Here’s a story about Gary Marcus, a renowned AI expert, basically ignoring all the context in the world:

His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”

Will be flooded?

That is literally what the Internet has done since its origin. The printing press flooded the average person. The radio flooded the average person. It’s not like the Internet, in true reinvention of the wheel fashion, grew in popularity because it came with an inherent truth filter.

The opposite: bad packets were always there for bad actors and — despite a huge amount of money invested for decades into “defenses” — many bad packets continue to flow.

Markets always are dangerous, deceptive places if left without systems of trust formed with morality (as philosopher David Hume explained rather clearly in the 1700s, perhaps too clearly given the church then chastised him for being a thinker/non-believer).

“Magritte’s original title of this painting was L’usage de la parole I (The Use of Speech I), which implies that we should question the veracity of the words, not the image.” Source: BBC

So where does our confidence and ability to move forward stem from? Starting a garden (pun not intended) requires a plan to assess and address, curate if you will, risky growth. We put speed limiters into use to ensure pivots, such as making a turn or changing lanes, won’t be terminal.

Historians, let alone philosophers and anthropologists, might even argue that having troubles with the truth has been the human condition across all communication methods for centuries if not longer. Does my “professor of the year” blog post from 2008 or the social construction of reality ring any bells?

Americans really should know exactly what to do, given their country has such a long history of regulating speech with lots of censorship; from the awful gag rule to preserve slavery, or banning Black Americans from viewing Birth of a Nation, all the way to the cleverly named WWII Office of Censorship.

What’s that? You’ve never heard of the U.S. Office of Censorship, or read its important report from 1945 saying Americans are far better off because of their work?

This is me unsurprised. Let me put it another way. Study history when you want to curate a better future, especially if you want to regulate AI.

Not only study history to understand the true source of troubles brewing now, growing worse by the day… but also to learn where and how to find truth in an ocean of lies generated by flagrant charlatans (e.g. Tesla wouldn’t exist without fraud, as I presented in 2016).

If more people studied history for more time we could worry less about the general public having skills in finding truth. Elon Musk probably would be in jail. Sadly the number of people getting history degrees has been in decline, while the number of people killed by a Tesla skyrockets. Already 19 dead from Elon Musk spreading flagrant untruths about AI. See the problem?

The average person doesn’t know what is true, but they know who they trust; the resulting power distribution is known by them almost instinctively. They follow some path of ascertaining truth through family, groups, associations, “celebrity” etc. that provides them a sense of safety even when truth is absent. And few (in America especially) are encouraged to steep themselves in the kinds of thinking that break away from easy, routine and minimal judgment contexts.

Just one example of historians at work is a new book about finding truth in the average person’s sea of lies, called Myth America. It was sent to me by very prominent historians talking about how little everyone really knows right now, exposing truths against some very popular American falsehoods.

This book is great.

Yet who will have the time and mindset to read it calmly and ponder the future deeply when they’re just trying to earn enough to feed their kids and cover rising medical bills to stay out of debtors’ prison?

Also books are old technology so they are read with heaps of skepticism. People start by wondering whether to trust the authors, the editors and so forth. AI, as with any disruptive technology in history, throws that askew and strains power dynamics (why do you think printing presses were burned by 1830s American cancel culture?).

People carry bias into their uncertainty, which predictably disarms certain forms of caution/resistance during a disruption caused by new technology. They want to believe in something, swimming towards a newly fabricated reality and grasping onto things that appear to float.

It is similar to Advance Fee Fraud working so well with email these days instead of paper letters. An attacker falsely promises great rewards later, a pitch about safety, if the target reader is willing (greedy) to believe in some immediate lies coming through their computer screen.

Thus the danger is not just in falsehoods, which surround us all the time our whole lives, but in how old falsehoods get replaced with new falsehoods through a disruptive new medium of delivery: fear spikes during rapid changes to the falsehoods believed.

What do you mean boys can’t wear pink, given it was a military tradition for decades? Who says girls aren’t good at computers when they literally invented programming and led the hardware and software teams where quality mattered most (e.g. Bletchley Park was over 60% women)?

This is best understood as a power shift process that invites radical, even abrupt, breaks depending on who tries to gain control over society, who can amass power, and how!

Knowledge is powerful stuff; creation and curation of what people “know” is thus often political. How dare you prove the world is not flat, undermining the authority of those who told people untruths?

AI can very rapidly rotate on falsehoods like the world being flat, replacing known and stable ones with some new and very unstable, dangerous untruths. Much of this is like the stuff we should all study from way back in the late 1700s.

It’s exactly the kind of automated information explosion the world experienced during industrialization, leading people eventually into world wars. Here’s an example of a falsehood that a lot of people believed: fascism.

Old falsehoods during industrialization fell away (e.g. a strong man is a worker who doesn’t need breaks and healthcare) and were replaced with new falsehoods (e.g. a strong machine is a worker that doesn’t need strict quality control, careful human oversight and very narrow purpose).

The uncertainty of sudden changes in who or what to believe next (power) very clearly scares people, especially in environments unprepared to handle surges of discontent when falsehoods or even truths rotate.

Inability to address political discontent (whether based in things false or true) is why the French experienced a violent disruptive revolution yet Germany and England did not.

That’s why the more fundamental problem is how Americans can immediately develop methods for reaching a middle ground as a favored political position on technology, instead of only a left and right (divisive terms from the explosive French Revolution).

New falsehoods need new leadership through a deliberative and thoughtful process of change, handling the ever-present falsehoods people depend upon for a feeling of safety.

Without the U.S. political climate forming a strong alliance, something that can hold a middle ground, AI can easily accelerate polarization that historically presages a slide into hot war to handle transitions — political contests won by other means.

Right, Shakespeare?

The poet describes a relationship built on mutual deception that deceives neither party: the mistress claims constancy and the poet claims youth.

When my love swears that she is made of truth
I do believe her though I know she lies,
That she might think me some untutored youth,
Unlearnèd in the world’s false subtleties.
Thus vainly thinking that she thinks me young,
Although she knows my days are past the best,
Simply I credit her false-speaking tongue;
On both sides thus is simple truth suppressed.
But wherefore says she not she is unjust?
And wherefore say not I that I am old?
O, love’s best habit is in seeming trust,
And age in love loves not to have years told.
Therefore I lie with her and she with me,
And in our faults by lies we flattered be.