Irving John Good was giving talks on artificial intelligence in 1962 and 1963, which he turned into a now-famous paper called “Speculations Concerning the First Ultraintelligent Machine”. As he put it at the time:
Based on talks given in a Conference on the Conceptual Aspects of Biocommunications, Neuropsychiatric Institute, University of California, Los Angeles, October 1962; and in the Artificial Intelligence Sessions of the Winter General Meetings of the IEEE, January 1963 [1, 46]. The first draft of this monograph was completed in April 1963, and the present slightly amended version in May 1964.
I am much indebted to Mrs. Euthie Anthony of IDA for the arduous task of typing.
That last note really caught my eye. How ironic to be giving thanks to a woman for typing this paper when the paper itself was about machines that would remove the need for women to type papers (let alone to be thanked for it).
It reminds me that the very poetic term “computer” was for a while used to describe predominantly female workers who did calculation by hand (e.g. in rocket science). Today hardly anyone other than historians thinks of computers as female, even though far too many people portray artificial intelligence as their idealized woman.
From there I have to highlight the opening line of Good’s paper:
The survival of man depends on the early construction of an ultra-intelligent machine.
What if we reframe this as early evidence of “mommy-tech”, which tends to be all too common in Silicon Valley?
In other words, men who leave their mothers and embark on successful, well-paid engineering careers in technology soon “innovate” by thinking of ways to make machines replicate their mothers.
Self-driving cars are about children raised to think their mother should drive them around (e.g. the Lift System of apartheid was literally white mothers driving their kids to school). Dishwashers are popular in cultures where mothers traditionally cleaned the plates after a meal.
Is mommy-tech liberating for women? In theory, a machine introduced to take over a task could be thought of as a way to liberate the person formerly tasked with that job. However, that does not seem to be how things work out at all, because imposing a loss does not automatically translate into a successful pivot to new tasks and opportunities.
If nothing else, more of something is obviously not better when that thing is loss. More loss, more death, more destruction only sounds good from a position of privilege where a rebuild or a repeat is even conceivable.
Good accidentally points this out himself in part two of his paper, where he treats intelligence as entirely zero-sum, such that machines getting more intelligent would mean “man would be left far behind”.
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind (see for example refs. [22], [34], [44]). Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.
Indeed, Wollstonecraft’s daughter Mary Shelley invented science fiction (Frankenstein) for a very good reason, which I often explain in my presentations. However, Good’s analysis here is not good, for reasons that are rarely discussed.
To my ears, trained in the history of power contention, it sounds like the men who said that women becoming intelligent (e.g. allowed to speak, read, and be educated) would represent a dangerous challenge: “docile enough to tell us how to keep it under control”.
And it doesn’t even have to be men and women in this “struggle” for domination.
Imagine the context of colonialism or the American history of Manifest Destiny, which similarly centered on oppressors keeping the “intelligence” of the oppressed under control.
If nothing else, you can’t deny that America ruthlessly and systemically denied Black people education, just as Wollstonecraft had so sagely warned in the 1790s; most Americans remain completely ignorant of this history, which keeps them docile.
Perhaps containment of intelligence should be framed like the filtration of water or the direction of energy: rather than holding things back, we should seek ways to increase output on measured outcomes. The point is not that safety becomes dominant or pervasive, but that loss is measured properly and accounted for, instead of being falsely implied as something inherent to gain.
Just as industrialization eroded male power, a domain shift that scared many into bunk response theories (false power projection) such as fascism, there are men today trying to gin up fear of gains (ultraintelligence) as some kind of loss.
It’s interesting that the answers to these power and control problems related to technology, and specifically to intelligence, were being sorted out way back in the 1700s, yet today people often frame them as recent, as if they needed to be solved for the first time.