Tesla has made a series of catastrophic management decisions that have rendered its “automation” hardware and software the worst in the industry.
Removing radar and lidar sensors to leave only low-grade cameras, and repeatedly forcing qualified staff into dead ends and replacing them with entry-level hires who wouldn't disagree with their CEO… shouldn't have been legal for any company regulated on public road safety.
“Lord of the Flies” might be the best way to describe a “balls to the wall” fantasy of rugged individualism behind an unregulated yet centrally dictated robot army of technocratic “autonomy”.
Now even the most loyal Tesla investors in the product, who have sunk their entire future and personal safety into such a fraud, are being forced to reveal the desperate and declining state of the company.
According to a Reddit forum thread, FSD 12.4 is unusable because it is so obviously unsafe.
The idea that “they will fix it soon” comes from the same user account that just posted a belief that Tesla’s vaporware “robotaxi” strategy is real. They believe, yet they also can’t believe, behavior typical of advance fee fraud victims.
Musk’s erratic leadership played a role in the unpolished releases of its Autopilot and so-called Full Self-Driving features, with engineers forced to work at a breakneck pace to develop software and push it to the public before it was ready. More worryingly, some former Tesla employees say that even today, the software isn’t safe for public road use, with a former test operator going on record saying that internally, the company is “nowhere close” to having a finished product.
Notably, the Tesla software continues to “veer” abruptly on road markings, which seems related to its alarmingly high rate of fatalities.
A big jump in the wrong direction: removing constraints that prevented deaths, and training the system toward causing harm.
Here’s a simple explanation of the rapid decline of Tesla engineering through expensive pivots, showing more red flags than a Chinese military parade:
First dead end? AI trained on still images. They discovered what everyone already knew: the more a big neural network ingested, the less it improved. It made catastrophic mistakes, and people died.
Restart and second dead end? A whole new real-world dataset for AI training on video. After writing nearly 500 KLOC (thousand lines of code), they discovered what everyone already knew: Bentham’s philosophy of the minutely orchestrated world was impossibly hard. Faster was never fast enough, and complex was never complex enough. It made catastrophic mistakes, and people died.
Restart and soon-to-be third dead end? An opaque box they can’t manage and don’t understand themselves is being fed everything they can find. An entirely new dataset for a neural net now depends on thoughts and prayers, because they sure hope that works.
It doesn’t.
This is not improvement. This is not evolution. They are throwing away everything and restarting almost as soon as each version reaches production. This is privilege: an emperor with no clothes displaying sheer incompetence by constantly running away from past decisions. The longer the fraud of FSD (Lord of the Flies) continues unregulated, the worse it gets, increasing the threat to society.
Update three days later: View from behind the wheel. This is NOT good.