Category Archives: Security

American Edition: Unveiling Nuances in AI Safety and Transparency

Here is alternative take to an earlier blog post, in response to a suggestion from a member of my fan club.


Good evening, ladies and gentlemen. Today, let us embark on an intriguing hypothetical journey, where we contemplate a scenario involving a distinguished company like Tesla. Picture this: a public declaration of unwavering commitment to enhancing road safety, while harboring concealed intentions to undermine it. Such a thought-provoking notion compels us to question the inherent risks associated with the development of artificial intelligence (AI). Researchers have shed light on the concept of “dual-use discovery,” the potential for AI systems, initially designed to enhance safety, to be repurposed and wielded for significant harm. However, delving into this complex subject requires a closer examination of the intricate nature of AI’s role and the responsibilities entailed in safeguarding its applications.

Ladies and gentlemen, the intricacies of AI safety are comparable to the multifaceted nature of everyday tools such as chef knives or hammers, which possess both utilitarian and destructive potential. This debate surrounding AI’s dual-use dilemma extends beyond the dichotomy of hunting rifles and military assault weapons. The complexity lies in the fact that society has become desensitized to the adverse outcomes resulting from a shift from individual freedom to a sudden, widespread catastrophic misuse of automated machines. Often, accidents and their consequences are viewed as isolated incidents, failing to address the systemic failures linked to profit-oriented structures driven by technological advancements that exacerbate the criminalization of poverty.

To navigate the intricacies of AI’s threat models, it becomes crucial to scrutinize the purpose and application of data pertaining to the use of such tools. We must carefully assess how these systems are operated and evaluate the potential for systemic failures due to lack of basic precaution. This examination is akin to instances involving controversial products like lawn darts or the infamous “Audi pedal.” The existence of failsafe mechanisms and adherence to safety protocols also warrant rigorous evaluation.

As society progresses, these complex problems manifest in tangible terms. While previous attempts to raise awareness on these issues were met with limited understanding, today, we witness a heightened recognition of the implications surrounding a single algorithm governing millions of interconnected vehicles, capable of acting as a dispersed “bad actor” swarm. Within this context, it becomes imperative to demand companies like Tesla to provide some real transparency for the first time, and provide substantiated evidence regarding the safety and integrity of their operations.

Ladies and gentlemen, addressing concerns regarding Tesla’s commitment to transparency and accountability within this realm is paramount to national safety. Can the company substantiate that its vehicles are neither designed nor capable of functioning as country-wide munitions, endangering the lives of countless individuals? Furthermore, are the escalating death tolls, which have been a cause for mounting alarm since 2016, indicative of an intentional disregard for the rule of law through their development, disregard, enabling, or negligent employment of AI technology?

In conclusion, the realm of AI, safety, and transparency presents a multifaceted puzzle that demands careful consideration by those most skilled and experienced with representative governance. It is crucial to foster comprehensive understanding and take a proactive approach to address the intricacies associated with the development, implementation, and accountability of AI systems. Catastrophic harms are already happening, and we must acknowledge these challenges and embrace transparency as a means to enact bans on threats to society. By doing so, we can continue on the time-tested path of responsible and ethical integration of technology into our lives.

And that’s the way it is.

The most trusted man in America

Fatigued by Breaches? Demand Lifeboats, Drowning is Not Destiny

Dr. Tim Sandle has kindly written in Digital Journal about my thinking on why people are often described, even self-described, as so fatigued about privacy breaches.

For Ottenheimer there are two currents to report on (or ‘general themes’) when looking at public concerns about data leaks. He explains these as: “The first is a rise of apathy and hopelessness that is often found in centrally-controlled systems that use “digital moats” to deny freedom of exit. Should people be concerned if they’re in a boat on the ocean that has reported leaks?”

Expanding on this, Ottenheimer moves things back to the Enlightenment, quoting a conservative thinker: “The philosopher David Hume wrote about this in 1748, in Of the Original Contract, explaining how consent is critical to freedom, and would enable a rise in concern”.

Quoting Hume, Ottenheimer cites: “We may as well assert, that a man, by remaining in a vessel, freely consents to the dominion of the master; though he was carried on board while asleep, and must leap into the ocean and perish, the moment he leaves her.”

In his subsequent analysis, Ottenheimer points out: “If there is no freedom from leaks then you may witness declining public concern in them as well. That’s a terrible state of affairs for people accepting they’re destined to drown when they should be demanding lifeboats instead.”

Pen Testers Need to Hack AI… Out of Existence

Robert Lemos wrote an excellent introduction to my RSA SF conference talk, over at DarkReading.

A steady stream of security researchers and technologists have already found ways to circumvent protections placed on AI systems, but society needs to have broader discussions about how to test and improve safety, say Ottenheimer…. “Especially from the context of a pentest, I’m supposed to go in and basically assess [an AI system] for safety, but what’s missing is that we’re not making a decision about whether it is safe, whether the application is acceptable,” he says. A server’s security, for example, does not speak to whether the system is safe “if you are running the server in a way that’s unacceptable … and we need to get to that level with AI.”

My presentation is available on the RSA SF conference site now, for those with a pass.

ChatGPT is Fraud, Court Finds Quickly and Sanctions Lawyer

For months now I have been showing lawyers how ChatGPT lies, and they beg and some plead for me to write about it publicly.

“How do people not know more about this problem” they ask me. Indeed, how is ChatGPT failure not front page news, given it is the Hindenburg of machine learning?

And then I ask myself how do lawyers not inherently distrust ChatGPT — see it as explosive garbage that can ruin their work — given the law has legendary distrust in humans and a reputation for caring about tiny details?

And then I ask myself why I am the one who has to report publicly on ChatGPT’s massive integrity breaches? How could ChatGPT be built without meaningful safety protections? (Don’t answer that, it has to do with greedy fire-ready-aim models curated by a privileged few at Stanford; a rush to profit from stepping on everybody to summit an artificial hill created for a evil new Pharoah of technology centralization).

All kinds of privacy breaches these days will result in journalists banging away on keyboards. Everyone writes about them all the time (two decades after regulation forced their hand, 2003 breach disclosure laws started).

However, huge integrity breaches seem to be left comparatively ignored even when harms may be greater.

In fact, when I blogged about the catastrophic ChatGPT outage practically every reporter I spoke with said “I don’t get it”.

Get what?

Are integrity breaches today somehow not as muckrackworthy as back in The Jungle days?

The lack of journalist attention to integrity breaches has resulted in an absurd amount of traffic coming to my blog, instead of people reading far better written stuff on the NYT (public safety paywall) or Wired.

I don’t want or need the traffic/attention here, yet I also don’t want people to be so ignorant of the immediate dangers they never see them before it’s too late. See something, say…

And so here we are again, dear reader.

A lawyer has become a sad casualty of fraud known as the OpenAI ChatGPT. An unwitting, unintelligent lawyer has lazily and stupidly trusted this ChatGPT product, a huge bullshit generator full of bald-faced lies, to do their work.

The lawyer asked the machine to research and cite court cases, and of course the junk engineering… basically lied.

The court was very displeased with reviewing lies, as you might guess. Note the conclusion above to “never use again” the fraud of ChatGPT.

Harsh but true. Allegedly the lawyer asking ChatGPT for answers decided it was to be trusted because it was asked if it could be trusted. Hey witness, should I believe you? Ok.

Apparently the court is now sanctioning the laziest lawyer alive, if not worse.

A month ago when presenting findings like this I was asked by a professor how to detect ChatGPT. To me this is like asking a food critic how can they detect McDonalds. I answered “how do you detect low quality” because isn’t that the real point? Teachers should focus on quality output, and thus warn students that if they generate garbage (e.g. use ChatGPT) they will fail.

The idea that ChatGPT has some kind of quality to it is the absolute fraud here, because it’s basically operating like a fascist dream machine (pronounced “monopolist” in America): target a market to “flood with shit” and destroy trust, while demanding someone else must fix it (never themselves, until they eliminate everyone else).

Look, I know millions of people willingly will eat something called a McRib and say they find it satisfying, or even a marvel of modern technology.

I know, I know.

But please let us for a minute be honest.

A McRib is disgusting and barely edible garbage, with long term health risks.

Luckily, just one sandwich probably won’t have many permanent effects. If you step on the scale the next day and see a big increase, it’s probably mostly water. The discomfort will likely cease after about 24 hours.

Discomfort. That is what nutrition experts say about eating just one McRib.

If you never experienced a well made beef rib with proper BBQ, that does not mean McDonalds has achieved something amazing by fooling you into paying them for a harmful lie that causes discomfort before permanent harmful effects.

…nausea, vomiting, ringing in the ears, delirium, a sense of suffocation, and collapse.

This lawyer is lucky to be sanctioned early instead of disboweled later.

Sorry, meant disbarred. Autocorrect. See the problem yet?

Diabetes is a terrible thing to facilitate, as we know from what happened from people guzzling McDonalds instead of real food and then realizing too late their life (and healthcare system) is ruined.

The courts must think big here to quickly stop any and all use of ChatGPT, with a standard of integrity straight out of basic history. Stop those avoiding accountability, who think gross intentional harmful lies for profit made by machines (e.g. OpenAI) should be prevented or cleaned up by anyone other than themselves.

The FDA, created because of reporting popularized by The Jungle, didn’t work as well as it should. But that doesn’t mean the FDA can’t be fixed to reduce cancer in kids, or that another administration can’t be created to block the sad and easily predictable explosion in AI integrity breaches.