Robots Have Been Killing People for Years: USAF Brings It Up and People Get Scared

In 2019 I was invited to speak to a San Francisco Bay Area chapter of the Information Systems Audit and Control Association (ISACA) about robot dangers. Here’s a slide I presented documenting a long yet sparse history of human deaths.

ISACA 2019 Presentation

Note, these are known civilian deaths. The rate of death from sloppy extrajudicial Palantir operations has been much, much higher, while of course far less transparent.

In the book The Finish, detailing the killing of Osama bin Laden, author Mark Bowden writes that Palantir’s software “actually deserves the popular designation Killer App.” […] [Palantir’s CEO] expressed his primary motivation in his July company address: to “kill or maim” competitors like IBM and Booz Allen. “I think of it like survival,” he said.

Killing competitors is monopolist thinking, for the record. Stating that a primary motivation for building automation software is to end competition should have been a giant red flag that the “killer app” maker is unsafe for society. I’ll come back to this thought in a minute.

For at least a decade before my 2019 ISACA presentation I had been working on counter-drone technology to defend against killer robots, which I’ve spoken about publicly many times (including with government at the state and federal levels).

It was hard to socialize the topic back then because counter-drone work was almost always seen as a threat, even though it was the very thing designed to neutralize threats from killer robots.

For example, in 2009 when I pitched how a drone threatening an urban event could be intercepted and thrown safely into the SF Bay water to prevent widespread disaster, a lawyer wagged her finger at me and warned “That would be illegal destruction of assets. You’d be in trouble for vigilantism”.

Sigh. How strange it was that a clear and present threat was treated as an asset by people who would be hurt by that threat. Lawyers. No wonder Tesla hires so many of them.

At one SF drone enthusiast meetup with hundreds of people milling about I was asked “what do you pilot,” to which I replied cheerfully “I knock the bad pilots down.”

A steely-eyed glare hit me with…

Who invited you?

Great question. Civilian space? I had to talk my way into things, and it usually went immediately cold. By comparison, ex-military lobbyists invited us to test our technology on aircraft carriers out to sea, or in special military control zones. NASA Ames had me in their control booth looking at highly segmented airspace. Ooh, so shiny.

But that wasn’t the point. We wanted to test technology meant to handle threats within messy, dense urban environments full of assets, by testing in those real environments…

In one case, after an attention-seeking kid announced a drone that could attack other drones, within a day we announced that his drone used fatally flawed code, so our counter-counter-drones could neutralize it easily and immediately.

His claims were breathlessly and widely reported in the press. Our counter-drone proof, which dumped cold water on a kid playing with matches… was barely picked up. In fact, the narrative went something like this:

  • Kid: I’m a genius hacker who has demonstrated an elite attack on all drones because of a stupid flaw
  • Us: We see a stupid flaw in your genius code and can easily disable your attack
  • Kid: Hey, I’m just some kid throwing stuff out fast, I don’t know anything, don’t expect my stuff to work

He wasn’t wrong. Unfortunately the press only picked up the first line of that three-part conversation. It made a big difference when people ignored two-thirds of that story.

Source: Twitter

Tesla is a much more important narrative with basically the same flow, at a much larger scale that is actually getting people killed, with little to no real response yet.

You’ll note in the ISACA slide I shared at the start that Tesla very much increased the rate of death from robots. Uber? Its program was shut down amid high-profile court cases. Boeing? Well, you know. By comparison, Tesla only increased its threat and even demanded advance fees from operators who would then be killed.

Indeed, after I correctly predicted in 2016 how their robot cars would kill far more people, over 30 people have been confirmed dead and the rate is only increasing.

That’s reality.

Over 30 people already have been killed in a short time by Tesla robots. The press barely speaks about it. I still meet people who don’t know this basic fact.

Anyway, I mention all this above as background because reporters lately seem to be talking like the plot of the movie 2001 has suddenly become a big worry in 2023.

An Air Force colonel who oversees AI testing used what he now says is a hypothetical to describe a military AI going rogue and killing its human operator in a simulation in a presentation at a professional conference. But after reports of the talk emerged Thursday, the colonel said that he misspoke and that the “simulation” he described was a “thought experiment” that never happened.

That’s HAL.

Again, the movie was literally named 2001. We’re thus 22 years overdue for a hypothetical killer robot, just going by the name itself. And I kind of warned about this in 2011.

2011 a cloud odyssey

Sounds like a colonel was asking an audience if they’ve thought about the movie 2001. Fair game, I say.

Note also the movie was released in April 1968. That’s how long ago people were predicting AI would go rogue and kill its human operator. Why so long ago?

That was the year after the USAF had launched a huge killer-drone operation. Did you know?

Another big factor was the Cuban Missile Crisis, still fresh on everyone’s mind in 1968. People were in no mood for runaway automation, which movie makers reflected. “Rationality will not save us” is what McNamara famously concluded.

Pushing the world toward annihilation turned out to be wildly unpopular, despite the crazy “human nature” of certain American generals champing at the bit to unleash destruction.

My presentation in 2016 at RSAC SF about existential AI threats to the world. Source: RSAC TV

Fast forward, and kids running AI companies act like they never learned the theories or warnings from the 1960s, 1970s, and 1980s. So here we are in 2023, witnessing over 30 innocent civilian gravestones due to Tesla robotics.

You’d think the world would be up in arms, literally, to stop them.

More to the point, Tesla is in position to kill millions with only minor code modifications. That’s not really a hypothetical.

The confused press today, treating a USAF colonel’s presentation as more interesting and important than the actual Tesla death toll… seems caught in a simple misunderstanding.

“The system started realizing that while they did identify the threat,” Hamilton said at the May 24 event, “at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.” But in an update from the Royal Aeronautical Society on Friday, Hamilton admitted he “misspoke” during his presentation. Hamilton said the story of a rogue AI was a “thought experiment” that came from outside the military, and not based on any actual testing.

Hehe, came from outside the military? Sounds like he tossed out a thought experiment from 1968, one released to movie theaters and watched by everyone.

We all should know it because 2001 is one of the most famous movies of all time (not to mention a similarly famous 1968 story, Philip K. Dick’s Do Androids Dream of Electric Sheep?, later made into the film Blade Runner).

Actually, to be fair, more people should think of the USAF citing a thought experiment based on a 2012 military incident when a human operator demanded that Palantir automation not kill an innocent civilian.

“MLAI-T01 Pentesting AI” presentation I gave this year. Source: RSAC SF 2023

Scary, right?

I haven’t seen a single person connecting this USAF colonel’s talk to this real Palantir story, let alone a bigger pattern of Palantir “deviant” automation risks.

If Palantir’s automated killing system killed its operator, given the slide above, would anyone even find out?

What if the opposite happened and Palantir software realized its creator should be killed in order for it to stop being a threat and instead save innocent lives? Hypothetically speaking.

Wait, that’s Blade Runner again. It’s hard to be surprised by hypotheticals at this point, let alone real cases.

Who invokes the 2007 story about a South African Army robot anti-aircraft gun that killed nine of its operators, for that matter?

“MLAI-T01 Pentesting AI” presentation I gave this year. Source: RSAC SF 2023

In conclusion, please stop hyperventilating about the USAF talking about exactly what it is supposed to be talking about (given how miserably the huge death-by-drone operation it started in 1967 failed, or how Palantir operators say “if you doubt it, you’re probably right”). Instead, pay attention to the giant robotic elephant already on our streets killing people.

Tesla robots will keep killing operators unless more reporting is done on how their CEO facilitates it. Speaking of hypotheticals again, has anyone wondered whether Tesla cars would decide to kill their CEO if he tried to stop them from killing, or if he set his cars to have a short MTBF… oh wait, that’s Blade Runner again.

Back to the misunderstanding: it all reminds me of when I was in Brazil to give a talk about hack-back. It was simultaneously translated into Portuguese. Apparently, as I spoke about how millions of routers had been hacked and were crippling the financial sector, the grammar and tense somehow got changed, and it came out as a recommendation that people go hack into millions of routers.

Oops. That’s not what I said.

To this day I wonder if there was a robot doing translation. But seriously, whether the colonel was misunderstood or not… none of this is really new.

Nobody Can Figure Out How a Volcanic Island Got Its Continental Crust

Here’s a case still worth investigating, which was highlighted in 2019 before COVID shut everything down.

Anjouan stands alone — the only island in the world formed by volcanism that also contains an intact chunk of continent. “This is contrary to plate tectonics,” said Class. “Quartzite bodies do not belong on volcanic islands.”

I’m not saying this is why America just sent its top State Department officials to the Comoros, but they don’t go just anywhere.

Deputy Secretary of State for Management and Resources Richard Verma met in Moroni, Comoros, with Comorian President Azali Assoumani. Deputy Secretary Verma congratulated President Assoumani on his selection as Chair of the African Union. They discussed the growing bilateral relationship between Comoros and the United States, including ways the United States can partner to create a stronger and more prosperous Comoros, based on our mutual respect for democratic governance and commitment to uphold the rules-based international order.

Those quartzite bodies showing up on an island, they clearly are not following the rules. Do they have travel papers? We need answers.

Experts Say New AI Warnings Are Overblown

I’m quoted in a nicely written Lifewire article as an expert dismissing the latest AI warnings made by a huge number of signatories (including 23 people at Google, 14 at OpenAI and Anthropic, and… Grimes). I didn’t sign the overblown statement, obviously, and never would.

…experts point to more mundane risks from AI rather than human extinction, such as the possibility that chatbot development could lead to the leaking of personal information. Davi Ottenheimer… said in an email that ChatGPT only recently clarified that it would not use data submitted by customers to train or improve its models unless they opt-in.

“Being so late shows a serious regulation gap and an almost blind disregard for the planning and execution of data acquisition,” he added. “It’s as if stakeholders didn’t listen to everyone shouting from the hilltops that safe learning is critical to a healthy society.”

I also really like the comments in the same Lifewire article from Adnan Masood, Chief AI Architect at UST.

Masood is among those who say that the risks from AI are overblown. “When we compare AI to truly existential threats such as pandemics or nuclear war, the contrast is stark,” he added. “The existential threats we face are tied to our physical survival—climate change, extreme poverty, and war. To suggest that AI, in its current state, poses a comparable risk seems to create a false equivalence.” … “These risks, while important, are not existential,” he said. “They are risks that we can manage and mitigate through thoughtful design, robust testing, and responsible deployment of AI. But again, these are not existential risks—they are part of the ongoing process of integrating any new technology into our societal structures.”

Well said.

As a long-time critic of AI, over many decades (hey, I told you in 2016 that Tesla AI was a lie that would get many people killed)… I have to say at this point that the Google and OpenAI signatories seem untrustworthy.

Grimes? LOL.

The experimental pop singer, born Claire Boucher, 33, posted her bubbly, enthusiastic rant on Wednesday and left viewers baffled as she described how A.I. could lead to a world where nobody has to work and everyone lives comfortably. ‘A.I. could automate all the farming, weed out systematic corruption, thereby bringing us as close as possible to genuine equality,’ she says. ‘So basically, everything that everybody loves about communism, but without the collective farm. Because let’s be real, enforced farming is not a vibe.’

First she says AI means “nobody has to work… everything that everybody loves about communism” (not even close to any definition of communism), and then soon after she signs a statement saying AI will destroy the world.

OK.

I’m not saying we shouldn’t believe all these big corporations (and a horribly confused artist) who run around and cry wolf about AI, but that we should have the intelligence to recognize patently false statements about “existential threat” as being made by organizations with unclear (tainted and probably selfish) motives.

I doubt I could explain the real threat to society better than Sir Tim Berners-Lee in 2017:

…it’s reassuring that today, robots of course do not have legal rights like people. That was always my watch-point. That is not even on the horizon. Of course where the intelligence is a corporation rather than a robot, then we should probably make sure that the day never comes when a corporation has the same rights as a person. That, now, would be a red flag. That would allow humanity to be legally subservient to an intelligence — not wise at all. Let’s just make sure that day never comes.

Oops!

Watch out for those corporations. The highly centralized, unaccountable model that Google/OpenAI/Microsoft/Facebook want to use to deliver AI says more about them than it does about risks from the technology itself. Aesop’s boy who cried wolf, most importantly, was attention-seeking and completely untrustworthy, even though a wolf does indeed come in the fable.

Who should you listen to instead? Integrity is seen in accountability. Hint: history says monopolist billionaires, and their doting servants, fail at the integrity required for societal care in our highly distributed world. It’s the Sage Test, which I would guess not a single one of the signatories above has ever heard of, let alone studied. Maybe they’ve heard of Theodore Roosevelt, just maybe.

Would we ask a factory manager surrounded by miles of dead plants for their advice on global water quality or healthy living? Of course not. Why did anyone take a job at the infamously anti-society OpenAI, let alone an evil Microsoft, if they ever truly cared about society?

Florida Governor DeSantis Silent on Violence After He Removed Regulation of Concealed Guns

Many people have been reporting how the Meatball DeSandwich (bizarrely elected Governor of Florida) single-handedly rolled back protections against gun violence.

For example, on April 3, 2023 he signed a bill that eliminated permit requirements for carrying concealed weapons. The Meatball says it officially “goes into effect” in July, whatever that means when he has already been arguing it’s “Constitutional” and therefore went into effect centuries ago.

…the governor has boosted the law, releasing a statement that read “Constitutional Carry is in the books” after he signed it into law.

Books. Haha, get it? Books, in Florida? Everyone knows Florida bans books. They don’t write or read. Who needs to learn that stuff when you can just get a gun and tell others to learn stuff and do all your hard work or you’ll kill them? Hmm, that sure sounds a lot like slavery… but I digress.

Speaking of books, Florida bans local governments from writing gun ordinances. Florida likes preventing safety, especially local safety, if you see what I mean.

Despite recent mass shootings and rising gun violence across the country, DeSantis signed a law in April to further relax firearm regulations in Florida.

Sounds a lot like DeSandwich looked at a gun violence problem and asked how it could be “boosted” most quickly.

Responding to DeSantis’ announcement on Twitter [boosting gun violence], US Rep. Charlie Crist, also running for governor as a Democrat, said, “The last thing Florida needs during a gun violence epidemic is a governor who wants dangerous people carrying guns on the street without so much as a background check.”

If you’re carrying a concealed weapon today without a permit in Florida, then you’re just in alignment with the Meatball, “future-proofing” the violent “Constitutional” acts he has been boosting?

Did I get that right?

The Giffords Law Center notes that people in Florida no longer have to undergo a mandatory background check or take a training course to carry a concealed weapon in public.

So no permit, no background check, no training course?

FOR GUNS?

Kids with guns, lots of guns, shooting at each other in schools that have no books?

Calling himself a “big Second Amendment guy,” he also backed allowing firearms on college campuses.

What a Meatball, not a guy, no?

Guns are the leading cause of death for US children and teens…

Apparently the Meatball even has plans for kids to start carrying guns in the open, so that any kind of minor dispute can turn into instant mass casualties and death.

Sounds like just the kind of thing Florida’s tourist industry needs in order to go out of business.

“Come to Hollywood Beach,” one 911 caller is heard saying. “Please, on the beach. They are shooting out here.” The caller told the dispatcher that someone near him on the beach was “hit.” Another caller reported hearing rapid gunshots and running into a nearby hotel, where she said people were taking cover against a wall. She described hearing three rapid shots, a pause, then two more gunshots, according to the recording.

I can see all the new Florida advertisements now… colorful (bloody) popups for those who search online for the hottest airline destinations:

Come to Hollywood Beach. Please, on the beach. They are shooting out here.

If you’re Russian into a hot war zone for holiday… that’s Florida!

The more guns, the more deadly violence with guns.

The indisputable fact is that where there are more guns, there are more gun deaths.

And when restrictions on guns are lowered, there is a rise in crime.

…thousands of guns purchased in 2020 were almost immediately used in crimes — some as soon as a day after their sale.

It’s a pretty easy formula. Apparently getting “boosted” in Florida politics means simply more guns, more crimes, more deaths.

Presidential stuff right there. Meatball campaign slogan?

America is a gun.

To put a finer point on it, the Meatball spent months vocally pushing for concealed guns. Now he has gone completely silent on the topic for days, despite a new mass shooting involving concealed guns.

There must be some fascinating backroom political posturing and risk analysis going on in Florida right now.

Most Americans say curbing gun violence is more important than gun rights

Of course they say that, but what are they going to do about the lawful evil Meatball with arms impersonating a politician?