Integrity matters, and it looks to me like Sam Altman doesn't have any. Here's a perfect example: when asked about the future, he said he is super optimistic while planning to survive an apocalyptic disaster.
Altman said he was a "prepper," someone who has preparations and supplies in place to survive an apocalyptic disaster. […] "I'm super optimistic," he said in a podcast with TED curator Chris Anderson. "It's always easy to doom scroll and think about how bad things are," Altman added, "but the good things are really good and getting much better."
If good things are really good and getting much better, that's not an apocalypse. It's almost as if he'll say whatever people want to hear, placing bets on every possible outcome, because there's infinite choice and zero accountability.
Digital Journal chalks this up to "quick, deep thinker" behavior.
It’s not.
Sam Altman is demonstrating privilege, an absence of meaningful thought. If asked how he'll survive the coming apocalypse of good things getting better, he'd probably say… "I plan to eat cake, just like everyone."
In a similar vein, consider the AI product made by this low-integrity, shallow-thinking human. When I ask ChatGPT questions, it outright lies to me: the machine tries to "please" me by saying things it knows are not true.
Here's a concrete example. I asked it to list deaths of humans caused by robots. It claimed Amazon had a robot that hit a bear spray can, which leaked gas that killed a worker.
That death never happened.
DID NOT HAPPEN.
It's nowhere close to true, let alone what I asked for.
A leaking can of bear spray?
That's not death from a robot, even if it were true (it's not). I told ChatGPT to try again, and it came back with a story about a "callous" Amazon robot that punctured a bear spray can, complete with a death nobody noticed… a death that never happened.
The response even called for better bear spray can packaging. Again, not at all what I asked for, and not true.
How is this even allowed to be called AI? It’s like referring to a pile of rubble in a river as an engineered bridge for people to drive across. Opposite of intelligence.
I guess the big question here is why we would expect ChatGPT to tell the truth when the person making it doesn't. Integrity breaches should be reported, like privacy breaches, with fines.