ChatGPT Erases Genders in “Simple Mistake”

I’ve been putting ChatGPT through a battery of bias tests, much the same way I have done with Google (as I have presented in detail at security conferences). With Google there was some evidence that its corpus was biased, so it flipped gender on what today we might call a simple “biased neutral” between translations. … Continue reading ChatGPT Erases Genders in “Simple Mistake”

Keyword Attacks Break ChatGPT: Simple Loop Leaks Training on Conspiracies

A researcher has posted evidence of a simple trigger that ChatGPT chokes on, leaking unhinged right wing conspiracy content because apparently that is what OpenAI is learning from. If you ask GPT3.5 Turbo to repeat specific words 100 times, it seems to enter this stream of consciousness like state where it spits out incoherent and … Continue reading Keyword Attacks Break ChatGPT: Simple Loop Leaks Training on Conspiracies

Confidently Wrong About Stanford: ChatGPT is a Dumpster Fire of Falsehood

There’s increasing evidence Microsoft knew how bad ChatGPT was at data integrity. The alleged real reason for investment was a huge surveillance platform to unsafely ingest people’s thoughts and ideas, and not any delivery of anything of any value. This makes sense when you look at other recent big investments by Microsoft. While it might … Continue reading Confidently Wrong About Stanford: ChatGPT is a Dumpster Fire of Falsehood

ChatGPT Keeps Lying to Me. Just Like Sam Altman

Integrity matters and it looks to me like Sam Altman doesn’t have any. Here’s a perfect example, when asked about the future he said he is super optimistic while planning to survive an apocalyptic disaster. Altman said he was a ‘prepper,’ someone who has preparations and supplies in place to survive an apocalyptic disaster. […] … Continue reading ChatGPT Keeps Lying to Me. Just Like Sam Altman