OpenAI, a Giant Private Trojan Horse: Grave Dangers CISOs Must Manage ASAP

Scientists, not to mention digital ethics experts, are warning that OpenAI may still be up to no good. Apparently it is run by a dangerous man whose continual, intentional deceptions now pose a direct threat to democracy.

Here, a scientist who appeared with Altman before the US Senate on AI safety flags up the danger in AI – and in Altman himself.

[…]

We simply can’t trust giant, privately held AI startups to govern themselves in ethical and transparent ways. And if we can’t trust them to govern themselves, we certainly shouldn’t let them govern the world.

That’s a general statement, yet also a specific one in its choice of the terms “giant” and “private” and “startup”. Not many giants are startups. Not many giants are private. A giant private startup means… a specific group of billionaires exploiting loopholes to concentrate power.

Organized crime masquerading as industry, accountable to no one, comes to mind, but perhaps that’s because I read too much American history.

This article in the Guardian reminds me very much of well-heeled malware campaigns, not to mention the Teapot Dome scandal.

Preeminent among the many difficult questions facing [corruption investigators] was, “How did Interior Secretary Albert Fall get so rich so quickly?”

In many ways OpenAI is framed as deeply corrupt: invasive and pervasive, like a rich man’s robotic worm burrowing into the largest private and public institutions to take over systems and undermine data integrity… to get rich quickly.

CISOs should already be thinking about how they will detect OpenAI and prevent the scandal-ridden system from infecting their operations.
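As a rough illustration, here is a minimal sketch in Python of flagging known OpenAI endpoints in DNS or proxy telemetry. The domain list is illustrative rather than exhaustive, and the one-hostname-per-line input format is an assumption; adapt both to your own logs and blocklists.

```python
# Minimal sketch: flag outbound lookups of known OpenAI endpoints in
# DNS or proxy logs. Domain list and log format are assumptions.
import re
import sys

# Domains associated with OpenAI services (illustrative, not exhaustive).
OPENAI_DOMAINS = (
    "openai.com",
    "api.openai.com",
    "chat.openai.com",
    "oaiusercontent.com",
)

# Match each domain exactly, or as the parent of a subdomain.
PATTERN = re.compile(
    r"(?:^|\.)(" + "|".join(re.escape(d) for d in OPENAI_DOMAINS) + r")$"
)

def flagged(hostname: str) -> bool:
    """Return True if the hostname is, or is under, a monitored domain."""
    return bool(PATTERN.search(hostname.strip().lower()))

if __name__ == "__main__":
    # Expects one hostname per line on stdin, e.g.:
    #   zcat dns.log.gz | awk '{print $5}' | python flag_openai.py
    for line in sys.stdin:
        host = line.strip()
        if host and flagged(host):
            print(f"FLAGGED: {host}")
```

The same domain list can feed an egress firewall or DNS sinkhole, turning detection into prevention.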

Who is ready to tell their board they are completely free of OpenAI? Have they even managed to kick out Palantir or Oracle yet?

The warnings about OpenAI are only getting louder, as more of its unaccountable private practices and deceptions are uncovered.

What’s the alternative? The Guardian piece mentions that too.

To get to an AI we can trust, I have long lobbied for a cross-national effort, similar to Cern’s high-energy physics consortium. The time for that is now. Such an effort, focused on AI safety and reliability rather than profit, and on developing a new set of AI techniques that belong to humanity – rather than to just a handful of greedy companies – could be transformative.

Take that to your board, your representatives, your friends. And if you need technical guidance, look at the Solid protocol.
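To make that pointer concrete: a Solid pod is essentially an HTTP server for user-owned data, typically serving RDF (Turtle) documents, with authentication and access control (Solid-OIDC, WAC/ACP) layered on top. Here is a minimal sketch of reading a public pod resource; the pod URL is a hypothetical placeholder, and the sketch omits authentication entirely.

```python
# Minimal sketch: fetch a public resource from a Solid pod over HTTP.
# The pod URL is a hypothetical placeholder; real deployments add
# Solid-OIDC authentication and access control, omitted here.
import urllib.request

POD_RESOURCE = "https://example.solidcommunity.net/public/profile"  # placeholder

request = urllib.request.Request(
    POD_RESOURCE,
    headers={"Accept": "text/turtle"},  # Solid resources are typically RDF
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))
```

The point is architectural: data lives in a pod the user controls and applications must ask for access, rather than data living in a vendor’s silo that users must ask to leave.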
