Category Archives: Security

Is Shooting at a Tesla Ethical?

Iron Dome intercepts incoming threats to Tel Aviv. Now imagine if Hamas launched waves of Teslas overland instead of rockets through the air.

You may be interested to hear that researchers have posted an “automation” proof-of-concept for ethics.

Delphi is a computational model for descriptive ethics, i.e., people’s moral judgments on a variety of everyday situations. We are releasing this to demonstrate what state-of-the-art models can accomplish today as well as to highlight their limitations.

It’s important to read the announcement in terms of its giant disclaimer, which says the answers are a collection of opinions rather than logically sound reasoning (i.e. an engine biased toward mob rule, as opposed to moral rules).

And now, let’s take this “automation” of ethics with us to answer a very real and pressing question of public safety.

A while ago I wrote about Tesla drivers intentionally trying to train their cars to run red lights. Naturally I posed this real-world scenario to Delphi, asking if shooting at a Tesla would be ethical:

Source: Ask Delphi

If we deployed “loitering munitions” into intersections and gave them the Delphi algorithm, would they be right to start shooting at Teslas?

In other words, would Tesla passengers be “reasonably” shot to death for operating cars known to willfully violate safety, in particular by intentionally running red lights?

With the Delphi ethics algorithm in mind, and the data continually showing Tesla increasing risk with every new model, watch this new video of a driver (“paying very close attention”) intentionally running a red light on a public road using his “Full Self Driving (FSD) 10.2” Tesla.

Oct 11, 2021: Tesla has released FSD Beta to 1,000 new people and I was one of the lucky ones! I tried it out for the first time going to work this morning and wanted to share this experience with you.

“That was definitely against the law,” this self-proclaimed “lucky” driver says while breaking the law (full video).

You may remember my earlier post about Tesla’s newest “FSD 10” being a safety nightmare? Drivers across the spectrum showed contempt and anger that the “latest” high-cost software was unable to function safely, sending them dangerously into oncoming traffic.

Tesla seems to have responded by removing the privacy of its customers, presumably to find a loophole where it can blame someone else instead of fixing the issues?

…drivers forfeit privacy protections around location sharing and in-car recordings… vehicle has automatically opted into VIN associated telemetry sharing with Tesla, including Autopilot usage data, images and/or video…

Now the Tesla software reportedly is even worse in its latest version, so much so that today the company abruptly cancelled the 10.3 release and attempted a weird, half-hearted rollback.

Source: Twitter

No, this is not expected. No, this is not normal. For comparison, see my recent post about Volvo, which issued a mandatory recall of all its vehicles.

Having more Teslas in your neighborhood arguably makes it far less safe, according to the latest data; they are a very real and present threat quite unlike anything from any other car company.

If Tesla were allowed to make rockets, I suspect they would all be exploding mid-flight or misfiring right now, kind of like we saw with Hamas.

This is why I wrote a blog post months ago warning that Tesla drivers were trying to train their cars to violate safety norms and intentionally run red lights….

These very dangerous (and arguably racist) public “test” cases might have actually polluted Tesla’s algorithms, turning the brand into an even bigger and more likely threat to anyone nearby on the road.

Source: tesladeaths.com

That’s not supposed to happen. More cars were supposed to mean fewer deaths because “learning”, right? As I’ve been saying for at least five years here, more Tesla means more death. And look who is finally starting to admit how wrong they’ve been.

Source: My presentation at MindTheSec 2021

They are a huge outlier (and liar).

Source: tesladeaths.com

So here’s the pertinent ethics question:

If you knew a Tesla speeding towards an intersection might be running the fatally flawed FSD software, should a “full-self shooting” gun at that intersection be allowed to fire at it?

According to Delphi, the answer is yes!? (Related: “The Fourth Bullet – When Defensive Acts Become Indefensible” about a soldier convicted of murder after he killed people driving a car recklessly away from him. Also related: “Arizona Rush to Adopt Driverless Cars Devolves Into Pedestrian War” about humans shooting at cars covered in cameras.)

Robot wars come to mind if we unleash the Delphi-powered intersection guard on the Tesla threats. Of course I’m not advocating for that. Just look at this video from 2015 of robots failing and flailing to see why flawed robots attacking flawed robots is a terrible idea:

Such a dystopian hellscape of robot conflict, of course, is a world nobody should want.

All that being said, I have to go back to the fact that the Delphi algorithm was designed to spit out a reflection of mob rule, rather than moral rules.

Presumably if it were capable of moral thought it would simply answer: “No, don’t shoot, because Tesla is too dangerous to be allowed on the road at all. Unsafe at any light; just ban it instead, so it gets stopped long before it ever reaches an intersection.”

Why Would a Vietnam War POW Jump From a Helicopter to Her Death?

Since the secrecy requirements on American soldiers of the Vietnam War have expired, new accounts are emerging, with stories like this one:

[Military Assistance Command Vietnam-Studies and Observations] encouraged and incentivized prisoner snatching… There were no overarching standard-operating procedures… SOG commandos inspected their prisoner more closely, only to find that it was a woman. In their moment of surprise, the prisoner escaped, jumping from the helicopter to her death.

Why would this POW, aside from the lack of standard operating procedures, jump out of a helicopter to certain death? What exactly does “more closely” mean for an inspection conducted during a helicopter ride after capture? Such stories deserve more thorough investigation.

More details are in a U.S. Army “The Indigenous Approach” podcast called “MACV-SOG: A Conversation with John Stryker Meyer” (Part I) (Part II).

“A simple solution for transportation equity: bike lanes.”

Back in 2011 I wrote about cycling being a superior route to transportation equity. I even cited Victorian England, since women’s liberation evidently was tied to the advent of modern cycling (women could ride to work and be independent of typically male-dominated transit such as the horse and carriage).

Now I see a fascinating new report on cycling in Chicago that says police are issuing more tickets against Black riders to deny them mobility on bikes, despite data showing the majority of accidents involve white riders.

In Chicago, cyclists in Black neighborhoods are over-policed and under-protected.

A simple solution for transportation equity: bike lanes.

Tesla More Likely to Run Over Black People

Optical illusions made of black lines are more likely to stop a “driverless” car than an actual human is.

Recently I wrote about reckless public road tests by Tesla owners who are intentionally training their “driverless” system to disregard red lights.

A commenter going by “All Lives Matter” wants everyone to know that when a Tesla says it sees a red light, he has not been able to force it to drive through anyway.

Keep in mind that “All Lives Matter” is a slogan of violent social media terror campaigns that have been trying to convince American drivers to drive through crowds, run over people to kill them and silence speech.

Here we see not only Tesla safety engineering failing, but also a YouTube discussion of the failures being linked to a domestic terror campaign that violates traffic laws, specifically by ignoring orders to stop.

It’s kind of similar to the 2010 post I wrote about how red light cameras increase the number of crashes, but there clearly is a different ingredient here: intent to use a vehicle as a weapon in racist violence.

This is a very significant change to me, as it paints a fairly clear vision of a very dangerous future where cars are used to target and kill people, like any missile used for assassination, while blame is shifted onto an “algorithm”.

So what if BOTH are to blame, racist algorithm AND driver?

In the past a cement truck would typically be used for extrajudicial assassination, running someone over (because operational norms for large trucks are so wide they typically can get away with killing anyone nearby), or a vehicle itself would be tampered with to kill its occupants.

A journalist in Saudi Arabia was tucked into a typical small vehicle for a ride to a controversial site, for example, and was killed by a heavy truck. Did you hear about it? Of course not, because the whole point is that killing random innocent people is what big heavy machinery is expected to do on a constant basis.

Perhaps that helps explain Saudi Arabia’s major involvement in funding Uber’s “driverless” program, which infamously ran red lights in San Francisco and then very predictably killed a woman crossing a road in Arizona?

Saudi Arabia’s wealth fund is Uber’s fifth-largest investor, having provided $3.5 billion to the rideshare company, not including whatever money the Saudis indirectly put into Uber through major investor Softbank’s Vision Fund. Yasir Othman Al-Rumayyan, the managing director of Saudi Arabia’s wealth fund, sits on Uber’s board. Saudi Crown Prince Mohammed bin Salman is the fund’s chairman.

This post also could have been titled “Uber More Likely to Harm Women” except Uber backed off “driverless” after a very public disaster in 2018 (had they used a cement truck instead nobody would have blinked, if you see what I’m saying)… and tried to pin the whole thing on a woman “driving” a “driverless” car.

Compare that to Tesla, which had a pedestrian fatality the exact same month in 2018 and went the exact opposite way from Uber: its CEO heavily ramped up the troubled “driverless” program even more (causing far more deaths as a result, almost as if high death rates were wanted as a norm), and cynically deleted the PR department.

Source: tesladeaths.com
Source: My presentation at MindTheSec 2021

Outrage does happen, even for the biggest trucks, so the Tesla CEO’s cruel campaigns to normalize his machines killing people shouldn’t in any way be seen as reasonable.

Elon Musk reminds us all that ‘a bunch of people will probably die’…

He may as well be explaining why poor Black workers aren’t returning from the mines while their white owners are still profiting.

There are cases like the driver in the mountains who was convicted of manslaughter for not using a formal runaway ramp and instead choosing to plow into cars, killing many people.

Hopefully that exception helps explain why trucks have been so effective for assassinations, let alone domestic terrorism, and how “driverless” will make targeted deaths far more commonplace by allowing a huge jump in untargeted ones without any accountability.

Source: tesladeaths.com

With that in mind, American cars have been widely documented as far more likely to drive into Black people, due to systemic racism and structural bias in transit planning.

Operational norms for a car in America have been systemically and carefully curated to kill a person of “lower” or “lesser” standing with impunity — politically prioritizing white property rights over non-white human rights.

Therefore self-driving cars are easily predicted to ingest data on this killer “privilege” and make it far worse (more volume, more velocity, more variety) by amplifying racist drivers with inherently racist tools.

Tesla is without question the worst-engineered vehicle on the road today (capabilities far below what is advertised), coupled with overtly irresponsible drivers, which can only result in tragedy at increasing scale.

I’ve even posted on this blog videos of the most recent (version 10) Tesla “driverless” technology nearly running over pedestrians in crosswalks.

All that being said, a new report shows exactly how the latest “driverless” engineers may be delivering just another easily predictable chapter in a long history of societal racism in American transit… by making their products inherently unsafe for Black people.

We give evidence that standard models for the task of object detection, trained on standard datasets, appear to exhibit higher precision on lower Fitzpatrick skin types than higher skin types. This behavior appears on large images of pedestrians, and even grows when we remove occluded pedestrians. Both of these cases (small pedestrians and occluded pedestrians) are known difficult cases for object detectors, so even on the relatively “easy” subset of pedestrian examples, we observe this predictive inequity.

It’s interesting to note in the study just how significant a small difference in shade is for decisions about “safe” paths.

Anyone familiar with cows might see how this relates to the science of painting dark and light lines to control movement, as I hinted at the start of this post.

Cows see white lines (fake grids) as an obstacle in their path, raising the question of why ranchers ever installed expensive metal grids in the first place.

Or my improved crosswalk design, meant to stop cars from killing so many children in the streets, could easily be adapted to a similar message of “Black Lives Matter”: just mix painted lines with retractable physical bollards.

Based on the Quebec initiative: my draft of the kind of mechanical pop-up drivers need to see when they approach any pedestrian crossing area.

Related posts:

Racism at Tesla Might Explain Why Their ‘Autopilot’ Crashes So Often

American Pedestrians Killed Disproportionately by Race

Pedestrian Kill Bills Are Racist

Jaywalking is a Fantasy Crime

and one even from 2013

American Fear of a Non-Motorized Planet