In May of this year Tesla bizarrely announced, against common sense, that it would remove radar from its cars and rely on a camera-only "vision" system for autonomous driving.
It's a dumb move by a car company, and it will only cause more crashes and deaths.
Safety is at the core of our design and engineering decisions. In 2021, we began our transition to Tesla Vision by removing radar…
Their announcement reads to me like disinformation, an intentional misrepresentation. When they say safety is at their core, they actually mean a lack of it.
Now Arbe, a radar maker, is mentioned in an interesting new article that throws some subtle shade:
…typical radar systems may struggle to properly identify objects passing under a bridge or overhead signage…
This needs to be seen in the context of the Florida crash that killed Tesla driver Joshua Brown. Remember the big news in 2016, when his death became the second fatality attributed to Tesla's "autopilot"?
The engineers have two main theories, the people said. Either the car’s cameras and radar failed to spot a crossing tractor-trailer. Or the cameras didn’t see the rig and the car’s computer thought the radar signal was false, possibly from an overpass or sign.
Tesla officials disclosed these theories to U.S. Senate Commerce Committee staff members during an hour-long meeting…
It turns out Tesla's system was "trained" mainly on roads around California, where overhead signs look just like trailers crossing the road, and that bias was then baked into a very rudimentary system.
Teslas repeatedly crash into the side of trailers because the system classifies the trailer as a stationary object above the road instead of recognizing a danger directly in the way.
Even more to the point, when a trailer moves left to right (perpendicular to the oncoming Tesla's path), the Tesla tracks the trailer and shifts to the right along with it, steering to drive underneath as if the straight road ahead had suddenly started curving to the right.
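To make that failure mode concrete, here is a minimal, purely hypothetical sketch. It is not Tesla's code; the names, thresholds, and the assumption of an elevation estimate (something legacy automotive radar often lacks) are invented for illustration. It shows how a filter that discards "stationary overhead" radar returns ends up ignoring a trailer crossing perpendicular to the car's path:

```python
# Purely hypothetical illustration -- not Tesla's code. All names and thresholds
# are invented to show how a naive "ignore stationary overhead clutter" filter
# can discard a trailer crossing perpendicular to the car's path.
from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float            # distance to the reflection
    elevation_m: float        # estimated height of the reflection above the road
    radial_speed_mps: float   # Doppler closing speed measured by the radar

def looks_stationary(r: RadarReturn, ego_speed_mps: float) -> bool:
    """An object that is not moving in the world closes at roughly our own speed.
    A trailer crossing perpendicular to our path also closes at roughly our own
    speed, because its motion is sideways to the radar beam."""
    return abs(r.radial_speed_mps - ego_speed_mps) < 1.0

def is_overhead_clutter(r: RadarReturn, ego_speed_mps: float) -> bool:
    """Naive filter: stationary plus high off the road = bridge or sign, ignore it."""
    return looks_stationary(r, ego_speed_mps) and r.elevation_m > 2.5

def should_brake(returns: list[RadarReturn], ego_speed_mps: float) -> bool:
    """Brake only for nearby returns that were not filtered out as clutter."""
    return any(
        r.range_m < 60 and not is_overhead_clutter(r, ego_speed_mps)
        for r in returns
    )

# The side of a trailer: reflections sit a few metres up and appear stationary.
trailer = RadarReturn(range_m=45, elevation_m=3.0, radial_speed_mps=29.8)
print(should_brake([trailer], ego_speed_mps=30.0))  # False: treated like a sign
```

The point of the sketch is that, to a filter built this way, the broad side of a trailer and an overhead sign produce nearly indistinguishable signatures, so the "safe" assumption is baked in and the car never slows down.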
And now Tesla has publicly admitted its engineers do not use continuous learning, or really any learning from the field, because it makes their jobs harder; that explains why the system keeps making the same basic fatal error over and over without improvement.
…we haven’t done too much continuous learning. We train the system once, fine tune it a few times and that sort of goes into the car. We need something stable that we can evaluate extensively and then we think that that is good and that goes into cars. So we don’t do too much learning on the spot or continuous learning…
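In machine-learning terms, that is a frozen, train-once pipeline rather than a continuous-learning loop. Here is a toy sketch (hypothetical, nothing like Tesla's actual stack) just to show the difference in data flow, where whatever the fleet encounters after deployment never reaches the frozen model:

```python
# Toy illustration only: the "model" is just the set of scenarios it has seen.
from typing import Iterable, List

def fit(dataset: List[str]) -> set:
    """Stand-in for initial training."""
    return set(dataset)

def fine_tune(model: set, new_data: Iterable[str]) -> set:
    """Stand-in for fine-tuning: fold new scenarios into the model."""
    return model | set(new_data)

def train_once(dataset: List[str]) -> set:
    """"Train the system once, fine tune it a few times" -- then it is frozen."""
    model = fit(dataset)
    for _ in range(3):
        model = fine_tune(model, dataset)   # same data, no field feedback
    return model

def continuous_learning(dataset: List[str], field_events: Iterable[str]) -> set:
    """Alternative: reports from the deployed fleet feed back into the model."""
    model = fit(dataset)
    for event in field_events:              # disengagements, near-misses, crashes
        model = fine_tune(model, [event])
    return model

california_roads = ["overhead sign", "freeway merge"]
fleet_reports = ["crossing trailer", "overhead sign"]

print("crossing trailer" in train_once(california_roads))                          # False
print("crossing trailer" in continuous_learning(california_roads, fleet_reports))  # True
```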
Here's even more context on that point: Tesla's bombastic, serial-liar CEO has gone 180 degrees from claiming to be the best at learning safety to admitting an inability to learn, as I explained in my recent security conference presentation:
Ford Pinto Deaths: 27
Tesla Deaths: 207
Tesla Autopilot Deaths: 10
Update May 2024 (three years later):
Tesla Deaths: 523
Tesla Autopilot Deaths: 44