You may recall when discussion of the OpenAI board centered on Helen Toner and Tasha McCauley, both of whom became ex-members as Sam Altman was ushered back in on a flurry of Microsoft propaganda.
When we were recruited to the board of OpenAI—Tasha in 2018 and Helen in 2021—we were cautiously optimistic that the company’s innovative approach to self-governance could offer a blueprint for responsible AI development. But based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives.
Helen Toner, of a certain Security Studies program at Georgetown University, was perhaps recruited as a voice of reason to help manage In-Q-Tel interests. Pushed out by Microsoft's "full evil" team, she was not exactly working on security in the sense of safe operations for a wobbly, back-stabbing startup culture, if you know what I mean.
Now OpenAI is pivoting, quite awkwardly, in an entirely different direction with a new announcement of "enterprise" operations overseen by an ex-director of the NSA.
OpenAI on Thursday announced its newest board member: Paul M. Nakasone, a retired U.S. Army general and former director of the National Security Agency. Nakasone was the longest-serving leader of the U.S. Cyber Command and chief of the Central Security Service.
“Mr. Nakasone’s insights will also contribute to OpenAI’s efforts to better understand how AI can be used to strengthen cybersecurity by quickly detecting and responding to cybersecurity threats,” OpenAI said in a blog post.
The company said Nakasone will also join OpenAI’s recently created Safety and Security Committee. The committee is spending 90 days evaluating the company’s processes and safeguards before making recommendations to the board and, eventually, updating the public, OpenAI said.
Notably, Nakasone joins as the only person on the board with national-level information security expertise and experience with massive operations. Arguably both have been sorely missing from the OpenAI board, given its attempts to terrify everyone with instability while still wanting to appear indispensable to American stability interests.
Looking on the bright side, perhaps Nakasone's appointment will open his eyes and help him articulate privately to the Pentagon why Palantir's ongoing nonsense should be driven hard out of town. As much as I find OpenAI poorly run, opaque, and repeatedly stumbling over its questionable intentions (all attributes of Palantir), it still seems fundamentally better than the immoral things a historically ignorant Peter Thiel has done.
Who can forget Palantir paying U.S. Congressmen to viciously attack the U.S. Army in order to shamelessly force the government into buying Palantir products? Even more to the point, Palantir sued the Army in court to force it into buying proprietary products designed to lock in customers.
Then again, OpenAI could turn out to be an even worse version of Stanford-laced Thielism.