Veered Tesla Crash on CA Highway 1 Kills Two

I know Bean Hollow Road on Highway 1 very well. I've been there countless times, even by motorcycle.

A terrible crash was reported Tuesday evening. At 6:39 PM, a southbound 2022 Tesla collided head-on with northbound traffic, sending itself and another vehicle spinning out of control down an east-side embankment into a lagoon.

Lagoon down an embankment south of Bean Hollow, north of Bean Hollow Road, on Highway 1. Source: Google Maps StreetView

The Tesla was traveling so fast that it ended up completely submerged under four feet of water, buried in thick mud.

Tesla slowly being winched from muddy lagoon along Highway 1. Source: KTVU

Here you can see the Tesla was recovered near the middle of the lagoon, after rolling down the embankment just before a blind curve to the right (note the large hill to the left, which obscures the approaching curve when southbound).

CHP officers stand out among the other first responders. Source: Dean Smith, ABC7 Bay Area

CHP claim they are not yet able to assign blame for who "veered" over the double-yellow line, straight into opposing traffic, despite nearly a decade of Tesla crashes with indicators similar to this one.

“At this point,” he said, “we don’t know exactly what caused it, or which vehicle first crossed over the yellow line.”

OK, but since a northbound car was smashed far east into that lagoon, it seems safe to start with the assumption that a southbound car drifted across the lines and struck it at high speed before also going eastward into the lagoon.

Source: Google Maps

Here's a view of the front left side of the Tesla and its totally compromised cage, giving a sense of the impact force in a zone with a posted speed limit of 55 mph. It looks more like 70 mph and rolling end over end.

Source: Dean Smith, ABC7 Bay Area

The catastrophic level of damage and the huge distance the Tesla flew from the road into deep mud suggest high speed approaching a blind right curve, which should be easy for investigators to determine. It may be that the Tesla didn't see or make the slight curve and ended up in the other lane as a northbound car suddenly came around the hill.
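
As a back-of-envelope illustration (not a crash reconstruction), kinetic energy scales with the square of speed, so the difference between the posted 55 mph and an estimated 70 mph is substantial:

$$\frac{E_{70}}{E_{55}} = \left(\frac{70}{55}\right)^2 \approx 1.62$$

That is roughly 62 percent more energy to dissipate in a collision, which would fit the severity of the cage damage and the distance traveled off the road.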

Given that 2022 Teslas tend to be less safe than earlier models, it will be interesting to find out whether dangerous "crap" engineering was responsible. Is it yet another camera blindness case, driving straight instead of turning? Or maybe the Tesla driver had hands on the wheel while looking backwards at the lovely Bean Hollow beach and steering too far left.

It just seems rather implausible for a northbound car to oversteer its mass west over the double yellow yet send both cars flying so far to the east. It also proves, yet again, that Tesla's many claims about "collision avoidance," made to juice investments since 2016, have always been a fraud.

Source: Twitter

There have been crashes in this spot before, yet never with this level of damage. Why did the driver of the other car die? Likely speed. Tesla is unique in increasing harm on roads because its CEO shamelessly pushes owners to ignore even Tesla's own warnings about safe operation of a vehicle, while handing them a far more dangerous high-speed vehicle.

It would be better if Tesla either stopped warning its drivers about danger, or if its CEO stopped overpromising false safety, because the combination of the two is the worst possible situation.

In other words, if it were just a CEO lying about safety, people would likely become skeptical. Or if it were just a car brand warning about dangers, people would likely be mindful. Instead, the "authoritative" lies sitting next to generic warnings negate important mindfulness, a sad social engineering trick that a CEO has been abusing to get more drivers… killed.

With this crash, the Tesla death toll is likely to rise above 400, with over 50 people killed in just the first half of 2023.
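
To put that pace in perspective, a simple extrapolation (assuming the first-half rate holds) gives:

$$50\ \text{deaths} \times \frac{12\ \text{months}}{6\ \text{months}} \approx 100\ \text{deaths per year}$$

A pace, in other words, consistent with safety declining rather than improving.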

When does it become mass murder?

Veered MI Tesla: Autopilot Accused of Trying to Kill Its Owner

A new police report has language much closer to what should have been reported in every Tesla crash since 2016.

The owner of the car says it tried to kill her.

As soon as she enabled the badly misnamed Autopilot, or perhaps after she turned on the badly misnamed FSD (at this point just call it crap), she alleges it veered across the lines and flipped itself into a tree.

…in the hospital Saturday night after she put her Tesla into self-driving mode and the car veered off the roadway, hit a tree and rolled several times, deputies said.

Note that this is law enforcement stating that a car was "self-driving" when it suddenly "veered." It only adds to the several warnings I've written recently about multiple similar "veered" crashes.

Here’s another detailed version of this new crash report:

At 6:41 p.m. Sunday, the driver was traveling westbound on McKinley Road and placed the Tesla she was driving into self-driving mode. Once selected, the vehicle pulled to the right, went off the road causing it to roll several times and hit a tree, according to Mecosta County deputies.

Again, it's not being reported in the local news as "the driver said" or as an allegation, but rather as a preliminary statement of fact by responders: "Once selected…[that crap] went off the road…" on an empty rural two-lane paved road with clear markings near "135th Ave" (a tiny dirt lane).

It's actually noteworthy that the car's algorithm managed to ignore bright lines and find a tree to hit in a mostly empty area.

Source: Google StreetView

What was the weather in Colfax Township, Michigan, at the time of the crash? Sunday, May 28 was dry (0.00 inches of precipitation for the 7 days prior), clear, and sunny, reaching a high of 87°F (low 43°F). She was traveling westbound, with sunset over two hours later at 9:20 PM.
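
For anyone who wants to sanity-check the sun-glare question (a westbound driver in the early evening), here is a minimal Python sketch using the astral library. The coordinates are my rough approximation for Colfax Township; the exact crash location on McKinley Road is an assumption.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

from astral import LocationInfo
from astral.sun import azimuth, elevation

# Rough coordinates for Colfax Township, Mecosta County, MI
# (an approximation; the exact crash site was not published).
loc = LocationInfo(
    name="Colfax Township",
    region="Michigan",
    timezone="America/Detroit",
    latitude=43.65,
    longitude=-85.35,
)

# Reported crash time: 6:41 p.m. local, Sunday, May 28, 2023.
crash_time = datetime(2023, 5, 28, 18, 41, tzinfo=ZoneInfo("America/Detroit"))

# Sun position at that moment: height above the horizon and compass bearing.
print(f"solar elevation: {elevation(loc.observer, crash_time):.1f} degrees")
print(f"solar azimuth:   {azimuth(loc.observer, crash_time):.1f} degrees")
```

With sunset still more than two and a half hours away, the sun would have been well above the horizon, so low-angle glare straight into a westbound windshield seems unlikely.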

There seems to be a worsening trend in Tesla safety (more danger in newer models), related to subpar engineering management practices, rushed production, and zero regard for human safety. In one very recent example, the Tesla was destroyed and its owner dead within the first 600 miles of "driverless" being used.

Model Year: 2023
Mileage: 590
Date: March 2023
Time: 05:15 (10:15 PM local time)

For what it's worth, I worked with security researchers on safety tests back in 2016 that proved a Tesla could veer suddenly off the road and crash (that was the last time I rode in one). I have never stopped warning about it since, including in a keynote talk at a security conference.

My predictions of more deaths due to more Teslas being on the road have been painfully accurate, even though in the summer of 2016 it was considered scandalous to even dare suggest such a thing.

Source: Tesladeaths.com

Talk about an early warning of AI dangers. Unfortunately, people still bought into the fraud… and now hundreds are dead (over 30 confirmed as the fault of AI). What's new? Given what I'm seeing in 2023 police reports, ongoing Tesla design failures coupled with the company's business problems mean safety will likely continue to decline through this year.

Consider, please: despite all the hype and bombast about becoming the safest car on the road, while charging customers huge fees for "full self driving," in 2023 we still read about a wrong-way veered Tesla:

CHP does not know what caused the Tesla to drive in the wrong direction… the Tesla was engulfed in flames. Two passengers inside the Tesla and the driver of the Acura were all pronounced dead at the scene.

Just like in 2013 (a decade ago), when we used to read about a wrong-way veered Tesla:

Investigators said the Tesla was leaving Laguna Beach and veered into oncoming traffic… Two adult men inside a “severely damaged” Honda Accord were declared dead at the scene, officials said.

What a veered car.

American Edition: Unveiling Nuances in AI Safety and Transparency

Here is an alternative take on an earlier blog post, in response to a suggestion from a member of my fan club.

Good evening, ladies and gentlemen. Today, let us embark on an intriguing hypothetical journey, where we contemplate a scenario involving a distinguished company like Tesla. Picture this: a public declaration of unwavering commitment to enhancing road safety, while harboring concealed intentions to undermine it. Such a thought-provoking notion compels us to question the inherent risks associated with the development of artificial intelligence (AI). Researchers have shed light on the concept of “dual-use discovery,” the potential for AI systems, initially designed to enhance safety, to be repurposed and wielded for significant harm. However, delving into this complex subject requires a closer examination of the intricate nature of AI’s role and the responsibilities entailed in safeguarding its applications.

Ladies and gentlemen, the intricacies of AI safety are comparable to the multifaceted nature of everyday tools such as chef knives or hammers, which possess both utilitarian and destructive potential. This debate surrounding AI’s dual-use dilemma extends beyond the dichotomy of hunting rifles and military assault weapons. The complexity lies in the fact that society has become desensitized to the adverse outcomes resulting from a shift from individual freedom to a sudden, widespread catastrophic misuse of automated machines. Often, accidents and their consequences are viewed as isolated incidents, failing to address the systemic failures linked to profit-oriented structures driven by technological advancements that exacerbate the criminalization of poverty.

To navigate the intricacies of AI’s threat models, it becomes crucial to scrutinize the purpose and application of data pertaining to the use of such tools. We must carefully assess how these systems are operated and evaluate the potential for systemic failures due to lack of basic precaution. This examination is akin to instances involving controversial products like lawn darts or the infamous “Audi pedal.” The existence of failsafe mechanisms and adherence to safety protocols also warrant rigorous evaluation.

As society progresses, these complex problems manifest in tangible terms. While previous attempts to raise awareness of these issues were met with limited understanding, today we witness a heightened recognition of the implications of a single algorithm governing millions of interconnected vehicles, capable of acting as a dispersed "bad actor" swarm. Within this context, it becomes imperative to demand that companies like Tesla provide some real transparency for the first time, along with substantiated evidence regarding the safety and integrity of their operations.

Ladies and gentlemen, addressing concerns regarding Tesla's commitment to transparency and accountability within this realm is paramount to national safety. Can the company substantiate that its vehicles are neither designed for nor capable of functioning as country-wide munitions, endangering the lives of countless individuals? Furthermore, are the escalating death tolls, which have been a cause for mounting alarm since 2016, indicative of an intentional disregard for the rule of law through the development, enabling, or negligent employment of AI technology?

In conclusion, the realm of AI, safety, and transparency presents a multifaceted puzzle that demands careful consideration by those most skilled and experienced with representative governance. It is crucial to foster comprehensive understanding and take a proactive approach to address the intricacies associated with the development, implementation, and accountability of AI systems. Catastrophic harms are already happening, and we must acknowledge these challenges and embrace transparency as a means to enact bans on threats to society. By doing so, we can continue on the time-tested path of responsible and ethical integration of technology into our lives.

And that’s the way it is.

The most trusted man in America