Ethics Shock: “Taser Drone” Production Rush Triggered Axon Board Resignation

Protocol has a fascinating look at how Axon tried to abuse its AI ethics board, which wasn’t having it:

Last week, Axon gave its AI ethics board two days’ notice of its intention to publicly announce the development of Taser drones for schools, after the board had spent about a year vetting a proposal to let law enforcement officials pilot the drones. In doing so, the resigning board members wrote, the company “bypassed Axon’s commitment to consult with the company’s own AI Ethics Board.”

Nine of the twelve board members resigned, which raises the question of what the remaining three were thinking. Reuters gives readers a clue.

It explored the idea of a Taser-equipped drone for police since at least 2016, and Smith depicted how one could stop an active shooter in a graphic novel he wrote. The novel shows a daycare center with what looks like an enlarged smoke alarm, which first recognizes the sound of gunfire and then ejects a drone, identifying and tasing the shooter in two seconds. Axon first approached its ethics board more than a year ago about Taser-equipped drones, and the panel last month voted eight to four against running a limited police pilot of the technology. The company announced the drone idea anyway, as it said it wanted to get past “fruitless debates” on guns after the Uvalde shooting, sending shares up nearly 6%.

A graphic novel concept (i.e. comic-book thinking, unrealistic and oversimplified)… that can send shares up by appearing impatient and insensitive. Who doesn’t want to stick around to “stay in that tent”?

Or to ask it another way, what’s missing from such a closed-minded, tightly controlled ethics equation in public safety product management?

Ethics board members worried the drones could exacerbate racial injustice, undermine privacy through surveillance and become more lethal if other weapons were added, member Wael Abd-Almageed said in an interview. “What we have right now is just dangerous and irresponsible,” said Abd-Almageed, an engineering research associate professor at University of Southern California.

Dangerous and irresponsible sounds like something the company might adopt as its tagline, or publish as its next comic-book take on public safety, given that such thinking is being credited with “shooting” up its shares.

Let’s be honest here. These drones are a terrible idea. As someone who has been regularly breaking AI and talking very publicly about it, I can say the Axon concept sounds like a disaster: increasing harm instead of lowering it, distracting from real solutions, and thus a total waste of money.

Putin Desperate to Deny Losing War: Goal Posts Moved Daily

The Washington Post highlights how far from reality Putin has drifted. He understands neither democratic governments (e.g. forever wars can fuel their cohesion and collaboration) nor the damage he’s doing to his own government.

Putin “believes the West will become exhausted,” said one well-connected Russian billionaire, speaking on the condition of anonymity for fear of retribution. Putin had not expected the West’s initially strong and united response, “but now he is trying to reshape the situation, and he believes that in the longer term, he will win,” the billionaire said. Western leaders are vulnerable to election cycles, and “he believes public opinion can flip in one day.”

Two things here. First, billionaires by definition tend not to be well-connected. I know it seems like having just a few “right people” in their contact lists is a form of connection, yet more realistically they spend a LOT of time trying to avoid being well connected. It’s well known they avoid sharing their assets, including their time, with others (e.g. hiding in secure compounds, luxury resorts, ships and tax havens). An actually well-connected billionaire is a unicorn or a footnote in history, because the more connected they are, the less they can define success in ways that everyone else disagrees with.

Second, billionaires falsely believe that “public opinion can flip in one day” when they have an immature sense of power and authority. It’s practically an admission of election tampering by Russia, as well as a hint that integrity is the core concern on any modern battlefield. “Now he is trying to reshape the situation” is another way of saying Putin thinks he can define winning as whatever he wants on any random day, instead of any actual measurable or shared objective over time (see point one above about avoiding connections).

How steep is Putin’s decline? It’s hard to overstate just how wrong hawks tend to be as they become more extreme. Being a hawk isn’t the issue so much as being extreme, as extremists disconnect from balanced thinking and enter into fantasy beliefs about “risk” and “danger” that tend not to exist.

In three vehemently anti-Western interviews given to Russian newspapers since the invasion, the previously publicity-shy Patrushev has declared that Europe is on the brink of “a deep economic and political crisis” with rising inflation and falling living standards already impacting the mood of Europeans, and that a fresh migrant crisis would create new security threats.

Russia needs migration to stay relevant in the world. It needs innovation, for example, which unquestionably comes from diverse experiences colliding. A migrant crisis is tragic, yet migration is also a source of all kinds of human advancement, as walls give way to truly well-connected people in shared experiences.

What betrays the Russian hawks’ extremism here is a childlike fear that helping others is a security threat. Imagine parents saying that helping their children would threaten the family’s stability. It doesn’t make sense, because Russia is in a self-imposed race toward isolation and hate; its security is in precipitous decline because its standard of living has been propped up on giving nothing without taking more.

Their thinking is exactly backwards. Helping others, sharing really, is foundational to systems that establish trust essential to achieve high degrees of security. Locking yourself in a jail is not actually being secure. Walking around shaking hands with people, welcoming them and taking care of them, is real security.

Eisenhower in 1949 famously said this in Galveston, Texas:

…we owe it to ourselves to attempt to understand the nature of the times in which we live. I believe that the person who attempts to live in 1849 instead of today is making just as serious an error as is that man who wants to discard everything that this country has accomplished since 1849 – who wants to discard the methods and the principles by which we have lived during that time, in favor of some new panacea, some new medicine that he will dish out to us from a bureaucrat’s chair….

Putin will keep trying to peddle panaceas and move the goal posts in the same way Hitler infamously did, winding himself into ever deeper and more secluded easy chairs from which he thinks he can flip public opinion.

Here’s some food for thought: WWII was in fact decided by 1942, and Germany clearly couldn’t win. The rest of those tragic years were Hitler trying to harm as many people as quickly as possible before killing himself in absolute seclusion.

A dictator must disconnect and surround himself with people who shout “score” just to make him feel like a winner, especially in wars he is obviously losing, because allowing any integrity into the room is like kryptonite to a dictatorship.

The British in WWII Reported Own Losses to Undermine Nazi Confidence in Wins

This Washington Post report about his cabal believing their “win” will come from hiding in a bunker to “outlast” everyone outside working together to overcome adversity… is about as far from winning as anyone can get.

“New AI technology will filter out any user bias”

I’m excited to be a part of this effort.

“We have a really ethical approach to machine learning, where we are looking at removing any bias from the algorithm,” says the BBC’s digital products chief, Storm Fagan.

Hopefully this helps explain how and why, after more than a decade of warning loudly and publicly about AI bias across many presentations and podcasts, I’ve now quietly shifted into a mode of building solutions.

Definition of “bad war”

Professor Stevenson at the LSE, perhaps the best WWI historian in the world, has a quick note on why that war was “bad”.

The First World War was the first modern industrial war. Millions of shells were manufactured, and millions of troops were enlisted – meaning it was fought on a much larger scale than 19th century wars. It’s also an emblematic example of what is sometimes called a ‘bad war’ – one which didn’t really achieve its aims, where the casualties were far worse than expected and where the outcome was indecisive.