AI existential risk: Is AI a threat to humanity?

I’m quoted by George Lawton in a new TechTarget article about threat modeling AI.

“Releasing ChatGPT to the public while calling it dangerous seems little more than a cynical ploy by those planning to capitalize on fears without solving them,” said Davi Ottenheimer… He noted that a bigger risk may lie in enabling AI doomsayers to profit by abusing our trust.

And allow me to clarify what I mean by abuse of trust by pointing to a linguistic analysis.

The rise of neofascism and pseudo-populism today is connected to the spread of a false belief system activated by the autocrat’s strategic, clever rhetoric, which is perceived as a language that speaks directly to everyone, unlike the talk of elites or academics.

Not to mention that widespread abuses are already being hidden.

The tech industry has a history of veiling the difficult, exploitative, and sometimes dangerous work needed to clean up its platforms and programs.
