Confusing signals are emanating from Microsoft’s “death star”, with some ethicists suggesting that it’s not difficult to interpret the “heavy breathing” of “full evil”. Apparently the headline we should be seeing any day now is: Former CEO ousted in palace coup, later reinstated under Imperial decree.
Even by his own admission, Altman did not stay close enough to his own board to prevent the organizational meltdown that has now occurred on his watch. […] Microsoft seems to be the most clear-eyed about the interests it must protect: Microsoft’s!
Indeed, the all-too-frequent comparison of this overtly anti-competitive company to a fantasy “death star” is not without reason. It evokes introductory political science principles, ones that strongly resonate with the historical events that influenced the fictional retelling. Still, using science fiction like “Star Wars” as a reference is a derivative analogy, not necessarily the sole or even the most fitting popular guide in this context.
William Butler Yeats’ “The Second Coming” is an even better reference, one that every old veteran probably knows. If only American schools made it required reading: some basic poetry could have helped protect national security by better enabling organizational trust and the stability of critical technology. Chinua Achebe’s “Things Fall Apart” (which takes its title from Yeats’ poem) is perhaps an even better, more modern guide through such troubled times.
Here’s a rough interpretation of Yeats through Achebe, applied as a key to decipher our present news cycles:
Financial influence empowers a failed big tech CEO with privilege, enabling his reinstatement. This, in turn, facilitates disruptive changes in society that benefit a select few, those who assume they can shield themselves from the widespread catastrophes unleashed upon the world for selfish gain.
And now for some related news:
The US, UK, and other major powers (notably excluding China) unveiled a 20-page document on Sunday that provides general recommendations for companies developing and/or deploying AI systems, including monitoring for abuse, protecting data from tampering, and vetting software suppliers.
The agreement warns that security shouldn’t be a “secondary consideration” regarding AI development, and instead encourages companies to make the technology “secure by design”.
That doesn’t say ethical by design. That doesn’t say moral. That doesn’t even say quality.
It says only secure, which is a known “feature” of dictatorships and prisons alike. How did Eisenhower put it in the 1950s?
From North Korea to American “slave catcher” police culture, we know that an excessive focus on security without a moral foundation leads to unjust incarceration. When security measures are exploited, they undermine a core element of “middle ground” political action: compassion, or care for others.
If you enjoyed this post please go out and be very unlike Microsoft: do a kind thing for someone else, because (despite what the big tech firms are trying hard to sell you) the future is not to foresee but to enable.