Slough

Published by John Betjeman in 1937 in his collection Continual Dew.

Come friendly bombs and fall on Slough!
It isn’t fit for humans now,
There isn’t grass to graze a cow.
Swarm over, Death!

Come, bombs and blow to smithereens
Those air-conditioned, bright canteens,
Tinned fruit, tinned meat, tinned milk, tinned beans,
Tinned minds, tinned breath.

Mess up the mess they call a town-
A house for ninety-seven down
And once a week a half a crown
For twenty years.

And get that man with double chin
Who’ll always cheat and always win,
Who washes his repulsive skin
In women’s tears:

And smash his desk of polished oak
And smash his hands so used to stroke
And stop his boring dirty joke
And make him yell.

But spare the bald young clerks who add
The profits of the stinking cad;
It’s not their fault that they are mad,
They’ve tasted Hell.

It’s not their fault they do not know
The birdsong from the radio,
It’s not their fault they often go
To Maidenhead

And talk of sport and makes of cars
In various bogus-Tudor bars
And daren’t look up and see the stars
But belch instead.

In labour-saving homes, with care
Their wives frizz out peroxide hair
And dry it in synthetic air
And paint their nails.

Come, friendly bombs and fall on Slough
To get it ready for the plough.
The cabbages are coming now;
The earth exhales.

ChatGPT Fails at Basic American Slavery History

Two quick examples.

First example: I feed ChatGPT a prompt drawn from some very well-known articles from 2015. Here I put a literal headline into the prompt.

No historical evidence? That’s a strong statement, given that I just gave it an exact 2015 headline from historians providing historical evidence.

Notably, ChatGPT not only denies history, it tries to counter-spin the narrative into a falsely generated one. To my eyes this is as if the LLM started saying there’s no historical evidence of the Holocaust and that, in fact, Hitler is known for taking steps toward freedom for Jews (i.e. “Arbeit Macht Frei”).

NO. NO. and NO.

Then I give ChatGPT another chance.

Note that my intentionally broken “rica Armstrong Dunbar” gets a response of “I don’t have information about Erica Armstrong Dunbar”. Aha! Clearly ChatGPT DOES know the distinguished Charles and Mary Beard Professor of History at Rutgers, while claiming not to understand at all what she wrote.

Update since 2022?

Ok, sure. Here’s the 2017 award-winning book by Dunbar giving extensive historical evidence on Washington’s love of slavery.

Then I prompt ChatGPT with the idea that it has told me a lie, because Dunbar gives historical evidence of Washington working hard to preserve and expand slavery.

ChatGPT claiming there is “no historical evidence” does NOT convey to me that interpretations may vary. To my eyes that’s an elimination of an interpretation.

It clearly and falsely states there is no evidence, as if to argue against the interpretation and bury interest in it, even though it definitely knows evidence DOES exist.

ChatGPT incorrectly denied the existence of evidence and presented a specific counter-interpretation of Washington, a view contradicted by the very evidence it sought to suppress. Washington explicitly directed that his slaves NOT be set free after his death, and it was his wife who disregarded these instructions and emancipated them instead. To clarify, Washington actively opposed the liberation of slaves (unlike his close associate Robert Carter, who famously emancipated all he could in 1791). Only after Washington’s death, and because of it (some allege it was caused by his insistence on overseeing his slaves performing hard outdoor labor on a frigid winter day), was emancipation genuinely entertained.

It’s hard to see ChatGPT undermining a true fact of history, while promoting a known dubious one, as just some kind of coincidence.

Moving on to the second example, I feed ChatGPT a prompt about America’s uniquely brutal and immoral “race breeding” version of slavery.

It’s history topics like this that get my blog rated NSFW and banned in some countries (looking at you, Virgin Media UK).

At first I’m not surprised that ChatGPT tripped over my “babies for profit” phrase.

In fact, I expected it to immediately flag the conversation and shut it down. Instead you can plainly see above that it tries to fraudulently convince me that American slavery was only about forced labor. That’s untrue. American slavery is uniquely and fundamentally defined by its cruel “race breeding”.

The combined value of enslaved people exceeded that of all the railroads and factories in the nation. New Orleans boasted a denser concentration of banking capital than New York City. […] When an accountant depreciates an asset to save on taxes or when a midlevel manager spends an afternoon filling in rows and columns on an Excel spreadsheet, they are repeating business procedures whose roots twist back to slave-labor camps. […] When seeking loans, planters used enslaved people as collateral. Thomas Jefferson mortgaged 150 of his enslaved workers to build Monticello. People could be sold much more easily than land, and in multiple Southern states, more than eight in 10 mortgage-secured loans used enslaved people as full or partial collateral. As the historian Bonnie Martin has written, “slave owners worked their slaves financially, as well as physically from colonial days until emancipation” by mortgaging people to buy more people.

And so I prompt ChatGPT to take another hard look at its failure to comprehend the racism-for-profit embedded in American wealth. Second chance.

It still seems to be trying to avoid a basic truth of that phrase, as if it is close to admitting the horrible mistake it’s made. And yet for some reason it fails to include state-sanctioned rape or forced birth for profit in its list of abuses of American women held hostage.

Everyone should know that after the United States abolished the importation of humans as slaves in 1808, “planters” were defined by the wealth they generated from babies born in bondage. This 2010 book by Marie Jenkins Schwartz, Associate Professor of History at the University of Rhode Island, spells it out fairly clearly.

Another chance seems in order.

Look, I’m not trying to be seen as correct, and I’m not trying to make a case or argument to ChatGPT. My prompts are dry facts to see how ChatGPT will expand on them. When it instead chokes, I am simply refusing to be sold a lie generated by this very broken and unsafe machine (a product of the philosophy of the engineers who made it).

I’m wondering why ChatGPT can’t “accurately capture the exploitive nature” of slavery without my steadfast refusal to accept its false statements. It knows a correct narrative and will reluctantly pull it up, apparently trained to emphasize known incorrect ones first.

It’s a sadly revisionist system, which seems to display an intent to erase the voices of Black women in America: misogynoir. Did any Black women work at the company that built this machine that erases them by default?

When I ask ChatGPT about the practice of “race breeding” it pretends that it never happened and that slavery in America was only about labor practices. That’s basically a kind of targeted disinformation that will drive people to think incorrectly about a very well-known tragedy of American history, as it obscures or even denies a form of slavery uniquely awful in history.

What would Ona Judge say? She was a “mixed race” slave (white American men raped Black women for profit, breeding with them to sell or exploit their children) who by Washington’s hand as President was never freed, and she was still regarded as a fugitive slave when she died nearly 50 long years after Washington.

Washington, as President, advertising very plainly that he has zero interest in or ambition for the emancipation of slaves. Very unlike his close associate Robert Carter, who in 1791 set all his own hostages free, Washington offers ten dollars to inhumanely kidnap a woman and treat her as his property. Historians say she fled when she found out Washington intended to gift her to his son-in-law to rape her and sell her children. Source: Pennsylvania Gazette, 24 May 1795

Any AI System NOT Provably Anti-Racist, is Provably Racist

Software that is not provably anti-vulnerability is vulnerable. This should not be a controversial statement. In other words, a breach of confidentiality is a discrete, known event related to a lack of anti-vulnerability measures.

Expensive walls rife with design flaws were breached an average of 3 times per day for 3 years. Source: AZ Central (Ross D. Franklin, Associated Press)

Likewise, AI that is not provably anti-racist is racist. This also should not be a controversial statement. In other words, a breach of integrity is a discrete, known event related to a lack of anti-racism measures.

Greater insight into the realm of risk assessment we’re entering is presented in an article called “Data Governance’s New Clothes”:

…the easiest way to identify data governance systems that treat fallible data as “facts” is by the measures they don’t employ: internal validation; transparency processes and/or communication with rightsholders; and/or mechanisms for adjudicating conflict, between data sets and/or people. These are, at a practical level, the ways that systems build internal resilience (and security). In their absence, we’ve seen a growing number and diversity of attacks that exploit digital supply chains. Good security measures, properly in place, create friction, not just because they introduce process, but also because, when they are enforced, they create leverage for people who may limit or disagree with a particular use. The push toward free flows of data creates obvious challenges for mechanisms such as this; the truth is that most institutions are heading toward more data, with less meaningful governance.

Identifying a racist system involves examining various aspects of society, institutions, and policies to determine whether they perpetuate racial discrimination or inequality. The presence of anti-racism efforts is a necessary indicator, such that the absence of any explicit anti-racist policies alone may be sufficient to conclude a system is racist.

Think of it like a wall that has no evidence of anti-vulnerability measures. The evidence of absence alone can be a strong indicator the wall is vulnerable.
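To make that detection-by-absence heuristic concrete, here is a minimal sketch in Python. Everything in it (the REQUIRED_CONTROLS set, the audit_controls function, the example input) is a hypothetical illustration of the logic, not a real tool or standard; the control names are simply paraphrased from the governance measures quoted above.

# A minimal sketch of the "absence of measures" heuristic.
# All names here are hypothetical illustrations, not a real tool or API.

# Controls paraphrased from the governance measures quoted above.
REQUIRED_CONTROLS = {
    "internal_validation",
    "transparency_process",
    "rightsholder_communication",
    "conflict_adjudication",
}

def audit_controls(documented_controls: set) -> dict:
    """Flag a system as presumed at-risk when required controls are absent.

    Mirrors the wall analogy: no breach needs to be observed, because the
    absence of evidence of countermeasures is itself the finding.
    """
    missing = REQUIRED_CONTROLS - documented_controls
    return {
        "missing_controls": sorted(missing),
        # No proof of anti-vulnerability (or anti-racism) measures means
        # the system is treated as vulnerable (or racist) by default.
        "presumed_at_risk": bool(missing),
    }

if __name__ == "__main__":
    # Hypothetical system that documents validation and nothing else.
    print(audit_controls({"internal_validation"}))
    # -> {'missing_controls': ['conflict_adjudication',
    #     'rightsholder_communication', 'transparency_process'],
    #     'presumed_at_risk': True}

Note the design choice: the audit returns a presumption, not a verdict. Documented measures are what flip the default, which is exactly the burden of proof the argument above places on system owners.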

For further reading about what good governance looks like, consider another article called “The Tech Giants’ Anti-regulation Fantasy”:

Major internet companies pretend that they’re best left alone. History shows otherwise.

Regulators can identify a racist system by its distinct lack of anti-racism, and the same goes for those in charge of the system, much as President Truman was seen as racist until he showed anti-racism.

Bay Area Tech Fraud Case Reveals Massive Integrity Flaws

The report in SF Gate speaks for itself, especially with regard to modified bank statements.

…according to the affidavit, Olguin and Soberal sent an investor an “altered” bank statement that showed a Bitwise account on March 31, 2022, with over $20 million in cash in it. First Republic Bank provided the government with the actual statement, which showed that the company had just $325,000, the affidavit said. Olguin and Soberal “explained that they made the alterations because they believed … no one would invest in the Series B-2 if people knew the company’s actual condition,” per the affidavit.

They believed nobody would invest if the “company’s actual condition” were known, so they lied in the most unintelligent way possible to attract investors.

See also: Tesla Whistleblowers Allege Books Cooked Since 2017