Veered MI Tesla: Autopilot Accused of Trying to Kill Its Owner

A new police report has language much closer to what should have been reported in every Tesla crash since 2016.

The owner of the car says it tried to kill her.

As soon as she enabled her badly misnamed Autopilot, or perhaps after she turned on the badly misnamed FSD (at this point just call it crap), she alleges it veered across the lane lines and flipped itself into a tree.

…in the hospital Saturday night after she put her Tesla into self-driving mode and the car veered off the roadway, hit a tree and rolled several times, deputies said.

Note that it is law enforcement making a statement that a car was “self-driving” when it “veered” suddenly. This only adds to the several warnings I’ve written recently about multiple similar “veered” crashes.

Here’s another detailed version of this new crash report:

At 6:41 p.m. Sunday, the driver was traveling westbound on McKinley Road and placed the Tesla she was driving into self-driving mode. Once selected, the vehicle pulled to the right, went off the road causing it to roll several times and hit a tree, according to Mecosta County deputies.

Again, it’s not being reported in the local news as something “the driver said” or alleged, but rather as a preliminary statement of fact by responders: “Once selected…[that crap] went off the road…” on an empty rural two-lane paved road with clear markings near “135th Ave” (a tiny dirt lane).

It’s actually noteworthy that the car’s algorithm managed to ignore bright lane lines and find a tree to hit in a mostly empty area.

Source: Google StreetView

What was the weather in Colfax Township, Michigan at the time of the crash? Sunday, May 28 was dry (0.00 inches of precipitation for the 7 days prior), clear and sunny, reaching a high of 87°F (low of 43°F). She was traveling westbound, with sunset over two hours later at 9:20 PM.

Tesla safety seems to be getting worse (newer models are more dangerous), related to subpar engineering management practices, rushed production, and zero regard for human safety. In one very recent example, the Tesla was destroyed and its owner dead within the first 600 miles of “driverless” being used.

Model Year: 2023
Mileage: 590
Date: March 2023
Time: 05:15 (10:15 PM local time)

For what it’s worth, I worked with security researchers on safety tests back in 2016 that proved a Tesla could veer suddenly off the road and crash (that was the last time I rode in one). I have never stopped warning about it since then, including in a keynote talk at a security conference.

My predictions of more deaths as more Teslas hit the road have been painfully accurate, even though in the summer of 2016 it was considered scandalous to even dare suggest such a thing.

Source: Tesladeaths.com

Talk about an early warning of AI dangers. Unfortunately, people still bought into the fraud… and now hundreds are dead (over 30 confirmed as the fault of the AI). What’s new? Given what I’m seeing in 2023 police reports, ongoing Tesla design failures coupled with the company’s business problems mean safety will likely continue to decline through this year.

Consider, please: despite all the hype and bombast about becoming the safest car on the road while charging customers huge fees for “full self driving,” in 2023 we still read about a wrong-way veered Tesla:

CHP does not know what caused the Tesla to drive in the wrong direction… the Tesla was engulfed in flames. Two passengers inside the Tesla and the driver of the Acura were all pronounced dead at the scene.

Just like a decade ago, in 2013, when we read about a wrong-way veered Tesla:

Investigators said the Tesla was leaving Laguna Beach and veered into oncoming traffic… Two adult men inside a “severely damaged” Honda Accord were declared dead at the scene, officials said.

What a veered car.

American Edition: Unveiling Nuances in AI Safety and Transparency

Here is an alternative take on an earlier blog post, in response to a suggestion from a member of my fan club.

Good evening, ladies and gentlemen. Today, let us embark on an intriguing hypothetical journey, where we contemplate a scenario involving a distinguished company like Tesla. Picture this: a public declaration of unwavering commitment to enhancing road safety, while harboring concealed intentions to undermine it. Such a thought-provoking notion compels us to question the inherent risks associated with the development of artificial intelligence (AI). Researchers have shed light on the concept of “dual-use discovery,” the potential for AI systems, initially designed to enhance safety, to be repurposed and wielded for significant harm. However, delving into this complex subject requires a closer examination of the intricate nature of AI’s role and the responsibilities entailed in safeguarding its applications.

Ladies and gentlemen, the intricacies of AI safety are comparable to the multifaceted nature of everyday tools such as chef knives or hammers, which possess both utilitarian and destructive potential. This debate surrounding AI’s dual-use dilemma extends beyond the dichotomy of hunting rifles and military assault weapons. The complexity lies in the fact that society has become desensitized to the adverse outcomes resulting from a shift from individual freedom to a sudden, widespread catastrophic misuse of automated machines. Often, accidents and their consequences are viewed as isolated incidents, failing to address the systemic failures linked to profit-oriented structures driven by technological advancements that exacerbate the criminalization of poverty.

To navigate the intricacies of AI’s threat models, it becomes crucial to scrutinize the purpose and application of data pertaining to the use of such tools. We must carefully assess how these systems are operated and evaluate the potential for systemic failures due to lack of basic precaution. This examination is akin to instances involving controversial products like lawn darts or the infamous “Audi pedal.” The existence of failsafe mechanisms and adherence to safety protocols also warrant rigorous evaluation.

As society progresses, these complex problems manifest in tangible terms. While previous attempts to raise awareness of these issues were met with limited understanding, today we witness a heightened recognition of the implications of a single algorithm governing millions of interconnected vehicles, capable of acting as a dispersed “bad actor” swarm. Within this context, it becomes imperative to demand that companies like Tesla provide, for the first time, some real transparency and substantiated evidence regarding the safety and integrity of their operations.

Ladies and gentlemen, addressing concerns regarding Tesla’s commitment to transparency and accountability within this realm is paramount to national safety. Can the company substantiate that its vehicles are neither designed nor capable of functioning as country-wide munitions, endangering the lives of countless individuals? Furthermore, are the escalating death tolls, which have been a cause for mounting alarm since 2016, indicative of an intentional disregard for the rule of law through the development, enabling, or negligent employment of AI technology?

In conclusion, the realm of AI, safety, and transparency presents a multifaceted puzzle that demands careful consideration by those most skilled and experienced with representative governance. It is crucial to foster comprehensive understanding and take a proactive approach to address the intricacies associated with the development, implementation, and accountability of AI systems. Catastrophic harms are already happening, and we must acknowledge these challenges and embrace transparency as a means to enact bans on threats to society. By doing so, we can continue on the time-tested path of responsible and ethical integration of technology into our lives.

And that’s the way it is.

The most trusted man in America

Fatigued by Breaches? Demand Lifeboats, Drowning is Not Destiny

Dr. Tim Sandle has kindly written in Digital Journal about my thinking on why people are often described, even self-described, as so fatigued about privacy breaches.

For Ottenheimer there are two currents to report on (or ‘general themes’) when looking at public concerns about data leaks. He explains these as: “The first is a rise of apathy and hopelessness that is often found in centrally-controlled systems that use “digital moats” to deny freedom of exit. Should people be concerned if they’re in a boat on the ocean that has reported leaks?”

Expanding on this, Ottenheimer moves things back to the Enlightenment, quoting a conservative thinker: “The philosopher David Hume wrote about this in 1748, in Of the Original Contract, explaining how consent is critical to freedom, and would enable a rise in concern”.

Quoting Hume, Ottenheimer cites: “We may as well assert, that a man, by remaining in a vessel, freely consents to the dominion of the master; though he was carried on board while asleep, and must leap into the ocean and perish, the moment he leaves her.”

In his subsequent analysis, Ottenheimer points out: “If there is no freedom from leaks then you may witness declining public concern in them as well. That’s a terrible state of affairs for people accepting they’re destined to drown when they should be demanding lifeboats instead.”

Pen Testers Need to Hack AI… Out of Existence

Robert Lemos wrote an excellent introduction to my RSA SF conference talk over at DarkReading.

A steady stream of security researchers and technologists have already found ways to circumvent protections placed on AI systems, but society needs to have broader discussions about how to test and improve safety, say Ottenheimer…. “Especially from the context of a pentest, I’m supposed to go in and basically assess [an AI system] for safety, but what’s missing is that we’re not making a decision about whether it is safe, whether the application is acceptable,” he says. A server’s security, for example, does not speak to whether the system is safe “if you are running the server in a way that’s unacceptable … and we need to get to that level with AI.”

My presentation is available on the RSA SF conference site now, for those with a pass.