Update a day later: the new CEO was just abruptly replaced by another one. Three CEOs in three days, for a company that claims to have the best prediction algorithm in history.
Update another day later: the replacement CEO was himself replaced by the first CEO, making American business culture look like it has exactly no clue about national security.
The short career path of the newly minted CEO of OpenAI reveals a curious trail of software dumpster fires and fatalities.
Start with the fact that she graduated in 2012 with a mechanical engineering degree from the Thayer School of Engineering at Dartmouth College in Hanover, New Hampshire. That’s only about a decade ago.
When she was asked last year about her favorite film (the science thriller 2001, a famous story about the struggle between humanity and its runaway monsters, and how to address societal threats such as the killer HAL computer), she offered some alarmingly empty thoughts.
A Space Odyssey continues to stir my imagination with its imagery and music, especially in the breathtaking sequence where the space shuttle docks accompanied by the waltz of Johann Strauss’s Blue Danube Waltz, inciting contemplation of the weightlessness of this event and the magnificence of the moment.
Ok. The waltz of the waltz. The weightlessness of being weightless.
Those are truly the most vacuous possible remarks about the movie 2001 (pun not intended).
A villainous machine is out to kill all humans, and the entire movie recap (by a supposed AI expert!) is that her breath was taken away by the “weightlessness” of a Strauss waltz?
That’s it?
Did she really even watch this movie? (TL;DR, as I presented in 2011: technology tends to disrupt faith in humanity, enabling dangerous “superhuman” mythology.)
Does she describe The Silence of the Lambs as a nice soundtrack for the magnificence of rubbing on skin lotion?
Right away it sounded illogical to me that anyone would appoint this person CEO of anything related to AI. So I searched for more context on her experience.
With her mechanical engineering degree in hand, she took a software job at Tesla in 2013, where her resume claims credit for being product manager of perhaps the most infamously murderous AI in history.
I’d point to her LinkedIn profile but it just disappeared from public view.
You have to remember that AI in 2013, when she made her foundational career move, was completely untrustworthy. The level of overconfidence and self-assigned credit is a huge red flag on her resume. It’s like saying she was the project manager for the overpromised, underdelivered flying machines of 1883, the ones that always crashed and burned. Not a good thing.
And then she seems to try to float a narrative that, after joining OpenAI in 2018 in project management, she alone built ChatGPT?
No CTO of any quality would ever claim to have single-handedly built something that hundreds of people worked on. That’s the most toxic CTO position possible.
I mean, if she is so responsible for the products she helped build, can we now assign her blame for their widely documented failures, including fatalities? For the legions of bugs, let alone all the deaths, is she accepting personal fault? Her product “terms of use” indicate… she doesn’t do accountability.
The Verge has reported the tragedy of her engineering failures plainly:
…hundreds of crashes involving Tesla vehicles using FSD and Autopilot and dozens of deaths…
Is she taking credit only for the successes and none of the failures, to elevate herself into talk shows and pay raises, hoping to pin the cleanup and cost on someone else?
The person who thinks that 2001 is a pleasing musical about light (weightless) topics, and who unleashed a mass killer robot (Tesla), is somehow suddenly CEO of OpenAI with only 10 years of experience?
Doesn’t add up. Weightless might be the right term.
Let’s consider for a minute the argument from the OpenAI board (and its new CEO) that the old CEO, Sam Altman, was moving too fast and too opaquely… so they quickly fired him with no warning.
Got that hypocrisy?
The first and foremost obligation of a board of directors, if you ask shareholders, is to the shareholders. This should not be news. In this case, “shareholders” really means just one: Microsoft (e.g., the $13 billion it gave OpenAI to make Bing better than Gates’ corrupt 1990s “Clippy” bot disaster). And yet Microsoft was not informed, not at all.
Talk about untrustworthy leadership.
Staff of OpenAI?
Also not informed, not at all, setting up an internal political bloodbath of fear and loyalty. Expect Microsoft, in “full evil ahead” mode, to swoop in and buy up every coin-operated Altman loyalist at OpenAI, ruthlessly gutting the company.
Those huge errors, all the way up and down the spectrum of dissent, are easily avoidable, massive failures of board-level diligence right out of the gate.
Perhaps it’s amateur executive hour at OpenAI because… well, look again: a very short resume with some notable catastrophic failures, including real-world robot deaths, not to mention obviously empty comments about the gravity of fictional robot deaths.
Next I expect someone to ask me whether her only ethics training was her 2011 internship on Wall Street (Goldman Sachs), which brings to mind another movie. I wonder if, while so rapidly climbing corporate ladders, the OpenAI team hums the soundtrack from…