A Simple Reason Why Tesla Keeps Crashing into Police Cars

Tesla Deaths: 207
Tesla Autopilot Deaths: 10
Ford Pinto Deaths: 27

Today at “Tesla AI Day” Tesla’s engineering team said the following on the main stage, and I quote:

…we haven’t done too much continuous learning. We train the system once, fine tune it a few times and that sort of goes into the car. We need something stable that we can evaluate extensively and then we think that that is good and that goes into cars. So we don’t do too much learning on the spot or continuous learning…

That’s a huge reveal by Tesla, since it proves a RAND report right.

Autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability in terms of fatalities and injuries. Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles—an impossible proposition if the aim is to demonstrate their performance prior to releasing them on the roads for consumer use.

On this trajectory it could take centuries before Tesla would achieve even a basic level of driving competency.

Think especially about Tesla saying “we need something stable,” because any hardware or software change to the “learning” system sets them backwards (which it definitely does).

The hundreds-of-years estimate gets longer and longer the more they push “newness” onto the road. 500 years is not an unreasonable estimate for when to expect improvement…
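For anyone curious where estimates like these come from, here is a minimal sketch of the kind of calculation the RAND report describes. The benchmark rate and confidence level below are illustrative assumptions, not figures from Tesla or RAND.

```python
import math

# Minimal sketch of a RAND-style "driving to safety" estimate.
# All inputs are illustrative assumptions, not official Tesla or RAND figures.

def failure_free_miles(benchmark_rate: float, confidence: float) -> float:
    """Miles that must be driven with ZERO fatalities to claim, at the given
    confidence level, that the true fatality rate is below the benchmark."""
    return -math.log(1.0 - confidence) / benchmark_rate

# Human benchmark: roughly 1 fatality per 100 million vehicle miles.
human_rate = 1.0 / 100_000_000

miles = failure_free_miles(human_rate, confidence=0.95)
print(f"{miles:,.0f} failure-free miles just to match the human rate")
# -> roughly 300 million miles, and that is the easy case

# Demonstrating the fleet is measurably *better* than humans (not merely no
# worse) requires observing fatalities at a lower rate, which pushes the
# mileage requirement into the billions -- and any hardware or software
# change restarts the evidence clock, exactly as the RAND quote warns.
```

The point of the sketch is simply that the evidence requirement scales with the rarity of the event, so a system that keeps changing can never finish accumulating it.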

This brings me all the way back to the first fatality caused by Tesla “autopilot” in January 2016.

A car traveling at high speed drove without any braking straight into the back of a high-visibility service vehicle with flashing safety lights.

Source: The Sun

Did you hear about it?

I have to ask because you should be aware of the “experts” who say patently false things like this:

…when the first person was killed using the Tesla autopilot in Florida, the truck [hit by the Tesla] was perpendicular to the direction the motion. The training did not have those images at all, so therefore the pattern matcher did not recognize that pattern.

No, Florida was not the first crash.

No, the first fatal “autopilot” crash was NOT perpendicular motion. It was running into the back of a safety vehicle despite flashing lights.

No, the series of fatal perpendicular-motion crashes (at least three so far, all using different hardware and software) is not about a failure to recognize a pattern or a failure of training.

In fact, the Florida crash was fatal *because* Tesla recognized a pattern (it thought it saw an overhead sign, common on California highways near Tesla HQ).

After nine seconds at 70 mph downhill, the Tesla in Florida shifted lanes left to right in an attempt to drive under the trailer between its wheels — that is obviously *because* of pattern recognition.

Let me put it this way: people probably would have survived their Tesla crashes if the car had simply been blind, or if “autopilot” had been disabled.

Getting little details like these about Tesla right is super important, because there have been more and more crashes very much like the actual first one FIVE YEARS AGO.

Talk about “we don’t do too much learning”!

Bottom line: despite the commonly stated fallacy of “learning” or “training,” the more Teslas on the road, the more crashes.

Thus, DO NOT believe anyone (expert or otherwise) who says “learning” is the answer to safety, unless by learning they mean regulators start learning how to hold Elon Musk accountable for lying about safety.

Back to the article about the January 2016 crash: it also makes a glaring error.

Company founder Elon Musk said the firm was in the process of making improvements to its auto pilot system aimed at dramatically reducing the number of crashes blighting the model S.

Elon Musk did not found the company. Technically, Martin Eberhard and Marc Tarpenning founded it in July 2003.

Musk joined them and invested his $6 million, basically stolen from PayPal, then used lies and exaggerations to push out the actual founders.

That’s an important point because it sets some context for why Tesla hasn’t improved, lacking its original idea people.

Fast forward to today and the number of Tesla “autopilot” crashes reported by the news has only increased dramatically, completely opposite from that “process” Musk claimed he was launching.

Federal safety regulators are investigating at least 11 accidents involving Tesla cars using Autopilot or other self-driving features that crashed into emergency vehicles when coming upon the scene of an earlier crash.

In 2021, federal safety regulators are investigating a series of crashes that mirror the conditions of the very first crash back in 2016.

See something fishy in that FIVE YEAR timeline?

We have seen a lot of accidents and very little investigation for a system that is supposedly “learning”.

The difference now seems to be that regulators realized what I’ve been saying here and in my presentations (very out loud) since 2016: Tesla’s negligence means more people being killed who aren’t inside a Tesla.

…“buyer beware” defense has been voiced loudly by Tesla’s defenders after previous crashes have grabbed headlines, such as one in Texas earlier this year in which two individuals inside a Tesla were incinerated (neither was reportedly in the driver’s seat). It’s impossible to claim consent exists for a first responder—or for anyone else struck by a Tesla driver.

This is a big deal, because it breaks with auto safety’s traditional orientation toward vehicle occupants.

So I was right all this time?

The data shows, as I predicted, that Tesla isn’t actually improving at avoiding crashes over time; it is just getting worse and worse.

The more cars they put on the road, the more tragedies. That is how scams work, which is also how I was able to reliably predict where we are now.

I have so many sad examples. Tesla data is not pretty.

Tesla’s Driver Fatality Rate is more than Triple that of Luxury Cars (and likely even higher)

And the causality is ugly too.

Autosteer is actually associated with an increase in the odds ratio of airbag deployment by more than a factor of 2.4

Tesla’s “autopilot” causes more crashes.
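For readers unfamiliar with the statistic quoted above, an odds ratio simply compares the odds of an event (here, airbag deployment) with and without a factor (Autosteer engaged). The counts in the sketch below are purely hypothetical, only to show how a figure like 2.4 is computed.

```python
# Hypothetical 2x2 table, purely to illustrate how an odds ratio like the
# "factor of 2.4" quoted above is computed. These counts are made up.
#
#                          airbag deployed   no deployment
#   Autosteer engaged             a                 b
#   Autosteer not engaged         c                 d

a, b = 24, 9_976   # illustrative crash counts with Autosteer engaged
c, d = 10, 9_990   # illustrative crash counts without Autosteer

odds_with_autosteer    = a / b
odds_without_autosteer = c / d

odds_ratio = odds_with_autosteer / odds_without_autosteer
print(f"Odds ratio: {odds_ratio:.2f}")   # ~2.40 with these made-up numbers
```

An odds ratio above 1 means the event is more likely when the factor is present; a value of 2.4 means the odds of airbag deployment more than double with Autosteer engaged, which is the opposite of what a safety feature should do.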

If you want to prove me (really the data) wrong, here’s an open job for you.

Source: LinkedIn

The explanation for such a devolution in the data is unfortunately rather simple.

Tesla “learning” is untrue. “Autopilot” is untrue. It’s all been a scam.

We knew this as soon as their “1.0” became “2.0” and the “autopilot” capabilities were far worse, even crossing double-yellow lines.

In reality, the “new version” reflected Tesla going backwards and losing a serious ethics dispute with engineers building the actual “autopilot”:

The head of driver-assistance system maker MobilEye has said that the company ended its relationship with Tesla because the firm is “pushing the envelope in terms of safety.” […] Given how instrumental MobilEye was in developing Autopilot, it’s a surprise to see Shashua effectively talk down his company’s product.

Talk down? I’m sorry, that’s Shashua calling Elon Musk a dangerous exaggerator and liar. It should not have been any surprise.

There is no rational reason to believe any update you are getting from Tesla will make anyone safer; it could actually be putting us all at greater risk of injury or death.

A new IEEE paper explains:

These results dramatically illustrate that testing a single car, or even a single version of deployed software, is not likely to reveal serious deficiencies. Waiting until after new autonomous software has been deployed [to] find flaws can be deadly and can be avoided by adaptable regulatory processes. The recent series of fatal Tesla crashes underscores this issue. It may be that any transportation system (or any safety-critical system) with embedded artificial intelligence should undergo a much more stringent certification process across numerous platforms and software versions before it should be released for widespread deployment.

Again, after the first fatality in January 2016 the talk track was “dramatically reducing the number of crashes,” and we’ve seen anything but that.

In one case a Tesla was pulled over by police because they didn’t see anyone in the driver’s seat. After being pulled over, the Tesla started moving again and crashed into the stopped police car.

The lies and inability to really learn are a continuous disappointment, of course, to those who want to believe machines are magic and things naturally get better over time if large amounts of money and ego are involved.

…it’s obviously a very hard problem, and no one is expecting Tesla to solve it any time soon. That’s why it’s so confusing that Musk continues to make promises the company can’t keep. Perhaps it’s meant to create hype and anticipation, but really it’s an unforced error that does nothing but erodes trust and credibility.

Even Tesla’s head of autopilot software, CJ Moore, has made it clear that Musk’s claim about self-driving capabilities “does not match engineering reality.” In addition, in a memo to California’s Department of Motor Vehicles, Tesla’s general counsel said that “neither Autopilot nor FSD Capability is an autonomous system, and currently no comprising feature, whether singularly or collectively, is autonomous or makes our vehicles autonomous.”

If you have studied dictatorships run by alleged serial liars like Elon Musk, you might recognize all the hallmarks of why things fall apart instead of improving. Federal regulation is long overdue as many deaths could have been prevented.

Let me conclude by saying we’ve had the answers to these problems for centuries. For example, we need to stop calling the basic rules some kind of edge case.

The rule is to stop at a stop sign, yet companies like Tesla fail at this and instead try to use propaganda to convince people that their failures are edge cases because we don’t see them very often.

An edge case is not defined solely by frequency. You can’t drive if you can’t stop at a stop sign, regardless of how far away it is from your starting point.

You could start using a camera at birth to record everything, such that a machine has seen everything you have. Teenage drivers today could have been doing this for the past 10 years (same age as Tesla), but that still wouldn’t really help, as we saw in January 2016 when the first “autopilot” fatality came from driving into a high-visibility service vehicle with flashing lights.

Learning is not the problem, given Tesla has had 5 years of bazillions of combined miles and still can’t tell a police car from an open road.

In other words, it’s not about whether the car has or hasn’t seen all those things; it’s that safe driving means training in a way that doesn’t require endless expansion of the data set.

Wollstonecraft gave us this philosophy in the 1790s when she argued that women and Black people are equal to white men. She clearly could see how to avoid harm without first sampling the entire data set.

Tesla keeps crashing in a similar way that a racist white police officer won’t stop assaulting innocent black people. A racist does NOT have to meet every black person in the world to stop blindly causing harm, and yet some racists never learn no matter how many people they meet.

Let’s be honest here. There is NO necessity to “see all there is to see” to be a safe driver.

This has been debunked since at least the late 1700s by philosophers, and repeatedly proven with basic science. Nobody in their right mind believes you have to sample every molecule in the world to predict whether it’s going to rain.

I say that in all sincerity as it’s a truism of science. Yet the “driverless” industry has been pouring money into dead-end work, trying to prove science wrong by paying people $1/hour to classify every raindrop.

“For example, if it’s drizzling, all the cameras are so strong that they can capture the tiniest water drop in the atmosphere.” In a category called “atmospherics,” workers may be asked to label each individual drop of water so the cars don’t mistake them for obstacles.

Such a mindset is an artifact of people trying to solve a problem the wrong way. And it should be increasingly obvious it is not how the problem actually will be solved.

If it were true that a good driver had to learn a large number of rare events to become experienced, it would not be possible for any human to be classified as a good driver. Humans barely reach a million miles in a lifetime, and can reach “experienced” status before 100,000 miles. Human drivers literally prove that achieving good-driver status does NOT require experience with a large number of rare events.

Again, this has been known and proven philosophy since the 1700s. Done and dusted.

More proof is that Waymo claims to have around 20 million “autonomous” miles yet can’t deny they are nowhere near ready for wide deployment.

The way things are being done (especially by Tesla, yet also basically everyone else) is not actually yielding utility in transportation safety (or efficiency). As a basic exercise in economics there are FAR more useful ways to spend the enormous amount of money, talent, resources etc. being devoted to such a broken status quo. Instead of Tesla, can you imagine if all that money had been spent on better rail?


Update August 30: More statistics in a new post, as I explore why Toyota cancelled their autonomous driving project after just one injury.

Afghanistan Lessons: No Good Exits From Losing. Was There a Way to Win?

I’m not convinced yet that there was a good way for the US to exit Afghanistan. Part of saying that the exit has been a disaster is to project or predict some better way to go about it.

Historians of the future will undoubtedly debate whether any good exit existed at all, and I for one am not seeing any evidence of it yet.

Think of it like a car accident. In the first two minutes as lanes are merging, many options in a decision tree present themselves with several good outcomes. Yet in the last seconds before slamming into each other, it’s just a matter of stop loss.

The only options left are all bad ones. This isn’t to say better options didn’t exist earlier, just that the point at which sudden and abrupt movements had to be made they all look bad.

With that in mind, after reading the story of US Army Special Forces officer Jim Gant I’m pretty sure he was exactly right about how to win the war. And not for the most obvious reasons. This makes perfect sense to me, for example:

…decentralized effort focused on empowering Afghanistan’s tribes rather than one that bolstered a corrupt central government…

That’s tapping right into the core of transitions we’re seeing around the world. Gant was on to something much, much bigger than Afghanistan.

It’s a narrative we even see played out regularly in the American news of its domestic tribes pushing for more “freedom” (read as control) and less oversight.

Just to be clear, flying the Confederate battle flag is tribalism. A group calling itself “Proud Boys” is tribalism. Perhaps it then has to be said that in no way would empowering these tribes in America turn out well for America.

And there’s the rub. Which tribes get to be magically empowered through foreign military intervention and why? Who decides and how? This was some of the (admittedly very naive and weak) foundation of my masters thesis work decades ago.

What jumped out at me in Gant’s particular study of the problem was something completely unexpected. (Full report in PDF: “One Tribe at a Time – A Strategy for Success in Afghanistan“)

Let’s go back in history for a minute.

In the 1800s President Grant required that his wife be buried alongside him, and in doing so he was refused his rightful place in a US military cemetery.

The best general and best president in American history was literally denied proper burial rights only because he cared so deeply for his life partner.

That is why Grant’s massively impressive tomb instead is conspicuously in the heart of NYC.

Gant’s story had an interestingly similar tone, since the woman he married joined him in the field. He brought her close enough that the US military wanted Gant out. Somehow that seems like a giant clue: Gant may have been so far ahead, understanding victory in a way Grant also did, that his ideas deserve much more attention.

He perhaps could have even won the war.

The bureaucratic hurdles he was up against were his downfall.

Douglas Lute, “a three-star Army general who served as the White House’s Afghan war czar” under former Presidents George W. Bush and Barack Obama, told interviewers “we were devoid of a fundamental understanding of Afghanistan – we didn’t know what we were doing.”

“What are we trying to do here? We didn’t have the foggiest notion of what we were undertaking,” Lute said in 2015, according to the Post.

In another example, Jeffrey Eggers, a retired Navy SEAL and White House staffer for Bush and Obama, bemoaned the cost of the war to interviewers, asking, “What did we get for this $1 trillion effort? Was it worth $1 trillion?” the Post said.

There’s a big disconnect between spend and value here, especially when you look at the transition from authorizing “harass” tactics under Carter to full-bore support of extremist right-wing religious militants under Reagan and Bush.

But is the right answer to shut off spending or to increase value from that spend? Are either options realistic?

It brings to mind retrospectives on an unsustainable cost of the Vietnam War, such as this one:

When you stop to think about it if you have $30M orbiting reconnaissance aircraft to transmit signals, and $20M command post to call in four $10M fighters to assault a convoy of five $5000 trucks with $2000 worth of rice, it’s easy to see that’s not cost-effective. This is a self-inflicted wound… a losing proposition…

That’s only a little bit ironic given Brzezinski in 1980 wanted the US to get into Afghanistan to make it into a Vietnam War for the USSR; a form of payback that would create political quagmire too expensive for the Soviets to sustain militarily.

Saying in 1980 that Kabul should be the Saigon of the USSR has literally turned into Russia saying Kabul should be a repeat of Saigon for America; don’t forget Putin cut his teeth in the KGB during the 1980s.

However, despite all these interesting and useful references to Vietnam, Gant’s predicament reminds me much more of the American Civil War.

I’m especially thinking about Lincoln’s decision to expeditiously promote Grant right to the very top of decision-making.

When I read it was a West Point graduate who petitioned to have Gant removed from his post — a hint at patronage instead of competence as a deciding factor — it reminded me of Grant as well. Grant had success navigating West Point while refusing to play into its patronage system, such that if America had depended on the men above Grant to win the Civil War it’s not clear they could have done it without him.

In that context, although I have limited Gant background, we have to wonder what would have happened if Gant had been promoted for his ideas and skill instead of kicked out on some technicality.

Did Twitter Just Put the Taliban in Power?

I love this sentence in a new explainer on how hard it has been for social platforms to kick the Taliban out.

We’ve seen revolutions in the age of social media; we’ve seen coups. But we haven’t seen a case where an internal insurgency successfully co-opts a state and seeks to take over that state’s functions.

It raises the question of whether the 2016 Presidential election was a coup or an insurgency that co-opted the state.

My argument has been that 2016 qualifies as a coup (which fizzled and fumbled for years until flaming out in 2021), yet here I’m seeing evidence that I may have to update that assessment.

Tesla Puts Safety Last, Undermines Quality to Rush Production

You may remember that a long time ago the CEO of Tesla was accused by the company’s founder of some pretty serious knowledge and ethics gaps, being called an “exaggerator and a liar”:

…claimed on a number of occasions to have degrees in both business and physics and to have briefly attended Stanford University, the suit alleges that Musk only has a degree in economics…

Lately those accusations have seemed like important foreshadowing of the CEO’s well documented lies and exaggerations.

According to a new book it’s also an insight into Tesla’s growing struggle to handle the truth — allegedly their manufacturing “approach” all along has been sacrificing safety and delaying fixes in order to push harmful exaggerations onto the road.

Musk’s approach to many manufacturing issues was, and still appears to be, keeping the assembly line moving while line problems are being fixed. He’s not a fan of the Toyota method, where a worker can stop the line until the problem is solved. He’s all about the volume.

That may be one reason why the quality of Teslas is so variable — why buying one can feel like a crapshoot. Some owners report their car is perfect; some say they were sold a piece of junk. (Including Kristen Wiig and Avi Rothman.)

In fact, Toyota ended a partnership with Tesla over such issues. “Musk was willing to let some quality issues slide if addressing them meant slowing down their schedule…,” Higgins reports. “Tesla was building the airplane as Musk was heading down the runway for takeoff.”

Of course Toyota ended that partnership. Tesla puts safety last. It’s a repeat story for many companies who realize far too late that they’re being pulled into a scam.

Getting more product out in a variable state of quality is a certain recipe for disaster, which is exactly what the data has been showing (Tesla’s safety record rapidly declines as they release more cars).

  • Tesla ‘Autopilot’ leads to *more* crashes than regular driving
  • Tesla Model S has higher insurance losses than other large luxury cars (higher frequency and severity)
  • Tesla has fire deaths at 4x the rate of other vehicles
  • Teslas have 2-4x more non-crash fires than the average car, and incur damages up to 7x higher
  • Teslas have triple the driver deaths of comparably priced luxury vehicles
  • Teslas crash twice as often as regular cars
  • Tesla Deaths as of 7/7/2021: 201

A “crapshoot” is not how safety regulation is supposed to work, but it’s a good description of what owners should be thinking about as their brand-new Tesla’s rushed-to-market battery spontaneously combusts and the door locks fail to release — a predictable death trap.