Tesla Cybertruck Owner Asks for Advice After FSD Tried to Kill Him

Elon Musk recently boasted that he would solve poverty, end world hunger, and achieve zero Tesla crashes in 2025.

Haha, no really, he actually said he would achieve zero Tesla crashes this year. That’s a real Elon Musk prediction for 2025.

Of course we have immediately seen evidence to the contrary, with Teslas crashing into each other, crashing into a pole, and now even trying to crash into head-on traffic.

“I did have a big scare the other day and wanted to share/get advice,” one Cybertruck owner recalled. “Backstory to the video, it has trouble turning into my driveway, it always wants to turn into the plant nursery beside my driveway (even though it has it right in the map) and I was recording it to show my dad the issue.”

“There was a truck coming from the other direction and it tried to turn right as the the [sic] truck was passing,” the post reads. “If I wouldn’t have snatched the wheel it could have been a head on collision.”

“This is my first Tesla and I’m worried about using FSD again…”

Yeah, yet another FSD user worried they will be killed by it just weeks after Elon Musk said Tesla would no longer have any crashes at all.

Tesla DUI Rate Surges 118% in 2024 Study

New data and analysis from LendingTree reveal an alarming surge in Tesla crashes, raising serious questions about the intersection of fraudulent autonomous driving claims and impaired driving.

To begin, Tesla ranks worst in crash data in nine states, which correspond to the states where it sells the most cars.

Tesla’s rates per 1,000 drivers:

  1. Total Incidents: 36.94 (up from 31.13)
    • This includes ALL types of incidents: accidents, DUIs, speeding, and citations
    • This makes Tesla the worst overall brand for total incidents
  2. Crashes Only: 26.67 (up from 23.54)
    • This counts crashes only (what some call accidents)
    • Tesla also ranks worst for crashes specifically

Or to put it simply:

  • Tesla = worst overall incident rate at 36.94 per 1,000 drivers
  • Tesla = worst crash rate at 26.67 per 1,000 drivers
  • Tesla = now suddenly the third-worst DUI rate at 2.23 per 1,000 drivers

Tesla DUI Numbers Most Shocking

Tesla’s DUI rate stands alone among all brands. It skyrocketed 118% in just one year, climbing from 1.02 to 2.23 incidents per 1,000 drivers.
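
For anyone who wants to check the arithmetic, here is a minimal Python sketch, using only the LendingTree figures quoted above (the dictionary holding them is my own illustrative construct), that reproduces the year-over-year changes, including the roughly 118% DUI jump:

    # Minimal sketch: verify the year-over-year changes in the LendingTree
    # rates quoted above (all figures are incidents per 1,000 drivers).
    rates = {
        "total incidents": (31.13, 36.94),  # prior year -> latest study
        "crashes only": (23.54, 26.67),
        "DUI": (1.02, 2.23),
    }

    for name, (prior, current) in rates.items():
        pct_change = (current - prior) / prior * 100
        print(f"{name}: {prior} -> {current} per 1,000 drivers ({pct_change:+.1f}%)")

The DUI line works out to about +118.6%, consistent with the 118% surge in the study, while total incidents and crashes rise roughly 19% and 13% respectively.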

This dramatic increase comes against a backdrop of controversial messaging about Tesla’s autonomous capabilities, including CEO statements since 2016 suggesting drivers can remove their hands from the wheel or even sleep while the car is in motion. Notably, after he promised sleep-at-the-wheel capability by 2017 and failed to deliver it, he simply kept repeating the lie as if unaccountable for his words.

Autonomous Driving Lies

Suffice it to say Tesla’s CEO has made numerous public statements suggesting drivers are safer letting their vehicle handle complex driving situations entirely on its own, even while the driver is incapacitated. His claims contrast sharply with Tesla’s official statements, which require drivers to remain alert and maintain control of the vehicle at all times.

The combination of these two, known within disinformation analysis as toxic shock, is the worst of all possible options. An official statement is far less likely to be heeded if the CEO contradicts it openly. If there were no official warning at all it would arguably be better, because the CEO would be believed less.

Killer Lies

The combination of messaging about autonomous capabilities and rising DUI rates suggests a potentially lethal misconception: that Tesla’s autonomous features make it safer to drive while impaired. This dangerous assumption ignores critical facts:

  1. Tesla’s autonomous features require an alert, sober driver
  2. No current autonomous system is approved for operation by an impaired driver
  3. Autopilot and Full Self-Driving features are driver assistance tools, not replacements, and dangerously inferior (e.g. camera-only) compared to most other car brands

In case it’s not obvious enough, what Elon Musk says is absolutely false. His false statements encourage impaired drivers to take unnecessary and illegal risks.

Public Safety Implications

Tesla owners face potentially higher insurance premiums due to rising incident rates, which ultimately affects everyone. DUI charges apply regardless of autonomous feature usage, and legal experts warn that Autopilot usage provides no defense in DUI cases.

This means that, despite fancy technological double-speak in public statements, the fundamental rules of safe driving still apply, in direct contradiction to the Tesla CEO’s wildly unaccountable and untrustworthy statements:

  • Never drive under the influence
  • Always maintain full attention on the road
  • Don’t rely on autonomous features for impaired driving
  • Arrange alternative transportation when drinking

As Tesla’s DUI rates continue to climb, it’s crucial to remember that for the foreseeable future no amount of technology makes impaired driving safe. I’m not a lawyer, but lawyers tell me Elon Musk keeps the money from his lies because he knows the responsibility for safe operation remains with the driver, regardless of his shady car-salesman public statements suggesting otherwise.

Tesla Crashes Through FAA Fence, Stopped by Tree

Again, questions are being asked about Tesla “driverless” claims after an early-morning, high-speed crash into a tree, which came after the car plowed through an FAA facility fence.

INCIDENT DATE/TIME: 2-8-25 6:20 am
LOCATION: 9175 Kearny Villa Rd (FAA)
AREA/CITY: San Diego
DETAILS:
— The male driver of the Tesla was driving northbound on Kearny Villa Rd when he lost control of the vehicle.
— He crashed through the barriers and fencing that protects the FAA building.

DeepSeek Jailbreaks and Power as Geopolitical Gaming

Coverage of AI safety testing reveals a calculated blind spot in how we evaluate AI systems – one that prioritizes geopolitical narratives over substantive ethical analysis.

A prime example is the recent reporting on DeepSeek’s R1 model:

DeepSeek’s R1 model has been identified as significantly more vulnerable to jailbreaking than models developed by OpenAI, Google, and Anthropic, according to testing conducted by AI security firms and the Wall Street Journal. Researchers were able to manipulate R1 to produce harmful content, raising concerns about its security measures.

At first glance, this seems like straightforward security research. But dig deeper, and we find a web of contradictions in how we discuss AI safety, particularly when it comes to Chinese versus Western AI companies.

The same article notes that “Unlike many Western AI models, DeepSeek’s R1 is open source, allowing developers to modify its code.”

This is presented as a security concern, yet in other contexts we champion open-source software and the right to modify technology as fundamental digital freedoms. When Western companies lock down their AI models, we often criticize them for concentrating power and limiting user autonomy. Even more to the point, many of the most prominent open source models are actually from Western organizations! Pythia (Eleuther AI), OLMo (AI2), Amber and CrystalCoder (LLM360), T5 (Google), Bloom (BigScience), Starcoder2 (BigCode), and Falcon (TII), to name a few.

Don’t accept an article’s framing of open source as “unlike many Western AI” without thinking deeply about why they would say such a thing. It reveals how even basic facts about model openness and accessibility are mischaracterized to spin a “China bad” narrative.

Consider this quote:

Despite basic safety mechanisms, DeepSeek’s R1 was susceptible to simple jailbreak techniques. In controlled experiments, the model provided plans for a bioweapon attack, crafted phishing emails with malware, and generated a manifesto containing antisemitic content.

The researchers focus on dramatic but relatively rare potential harms while overlooking systemic issues built into AI platforms by design. We’re more concerned about the theoretical possibility of a jailbroken model generating harmful content than we are about documented cases of AI systems causing real harm through their intended functions – from hate speech, to chatbot interactions that have influenced suicides, to autonomous vehicle accidents.

The term “jailbreak” itself deserves scrutiny. In other contexts, jailbreaking is often seen as a legitimate tool for users to reclaim control over their technology. The right-to-repair movement, for instance, argues that users should have the ability to modify and fix their devices. Why do we suddenly abandon this framework when discussing AI?

DeepSeek was among the 17 Chinese firms that signed an AI safety commitment with a Chinese government ministry in late 2024, pledging to conduct safety testing. In contrast, the US currently has no national AI safety regulations.

The article laments a concerning lack of safety measures, while simultaneously noting DeepSeek’s signed safety commitments, and while criticizing the model for being so easily modified that those commitments can be ignored. This head-spinning series of contradictions reveals how geopolitical biases can distort our analysis of AI safety.

We need to move beyond simplistic Goldilocks narratives about AI safety that automatically frame Western choices as inherently good security measures while Chinese choices can only be either too restrictive or too permissive. Instead, we should evaluate AI systems based on:

  1. Documented versus hypothetical harms
  2. Whether safety measures concentrate or distribute power
  3. The balance between user autonomy and preventing harm
  4. The actual impact on human wellbeing, regardless of the system’s origin

The criticism that Chinese AI companies engage in speech suppression is valid and important. However, we undermine this critique when we simultaneously criticize their systems for being too open to modification. This inconsistency suggests our analysis is being driven more by geopolitical assumptions than by rigorous ethical principles.

As AI systems become more prevalent, we need a more nuanced framework for evaluating their safety – one that considers both individual and systemic harms, that acknowledges the legitimacy of user control while preventing documented harms, and that can rise above geopolitical biases to focus on actual impacts on human wellbeing.

The current discourse around AI safety often obscures more than it reveals. By recognizing and moving past these contradictions, we can develop more effective and equitable approaches to ensuring AI systems benefit rather than harm society.