Another Day, Another Tesla Owner “Thankful” to Not Be Burned Alive

A man who owned a Tesla that “spontaneously combusted” while driving allegedly said “it’s all gone” after he walked away with his life.

That feeling of loss is very on brand for a car company that can’t explain why it has so many fires.

Source: KCRA

You buy Tesla, you lose it all.

The owner mentioned that his two very young kids fortunately weren’t strapped into their car seats at the time, which reminds me of another Tesla father’s unexplained “spontaneous combustion” nightmare: watching his kid’s car seat melt.

Tesla is unique in the car industry for these stories, which reflect what can only be described as willful neglect of fire investigation and resolution. I’ll never forget the NHTSA Complaint (#11466262) by a father wailing publicly about his son being burned to death.

Can Tesla be forced to care or will the market just move on without them?

Tesla-fire.com as of January 2023 was nearing the 200 mark in recorded fires, already reporting over 50 deaths.

Over 50 deaths!

Other manufacturers intensively research issues and issue very specific recalls as a proactive measure. If you read the NHTSA data you see a marked difference.

Tesla has repeatedly said it doesn’t understand why its fires happen so often; it points fingers at others, and it clearly hopes people will just act “thankful” for nearly dying in fires instead of being realistic about prevention.

Last January, for example, we heard a similar story.

A Tesla car battery “spontaneously” burst into flames on a California freeway Saturday, and firefighters needed 6,000 gallons of water to put it out. The Metro Fire Department said in a series of tweets that “nothing unusual” had occurred before the Tesla Model S became “engulfed in flames”…

A giant fire would be treated as unusual for any other car maker, and it would be reported as such. Hyundai, Kia and Ford electrical fires, for example, have produced multiple specific resolutions over a decade, including NHTSA “engineering analysis” of non-crash spontaneous electrical fires.

For Tesla?

Fires get called “nothing unusual” as owners try to run from a flaming death trap. The car that was promised to be safer is quickly proven less safe. How can that predictive disconnect go on much longer? It will get better? That’s not how any of this works. It will get worse unless there is proof of improvement (e.g. the Shewhart/Deming PDCA/PDSA quality measurement models, known and proven since the 1920s).
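For anyone unfamiliar with those models, here is a minimal sketch of the Shewhart-style control chart at the heart of PDCA/PDSA. The quarterly fire counts are hypothetical, my own illustration rather than Tesla data:

```python
# A minimal sketch of a Shewhart c-chart, the measurement step that
# PDCA/PDSA cycles depend on. The quarterly fire counts below are
# hypothetical, for illustration only -- not Tesla data.
import math

fires_per_quarter = [4, 6, 5, 9, 8, 11, 13, 22]

mean = sum(fires_per_quarter) / len(fires_per_quarter)
ucl = mean + 3 * math.sqrt(mean)  # upper control limit for count data

for quarter, count in enumerate(fires_per_quarter, start=1):
    flag = "  <-- out of control, investigate" if count > ucl else ""
    print(f"Q{quarter}: {count:2d} (mean {mean:.1f}, UCL {ucl:.1f}){flag}")
```

Without a chart like that, “nothing unusual” is an assertion, not a measurement.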

Tesla quality failures get worse over time. Later models show a pattern of safety decline.

Now for some analysis related to Tesla’s ill-conceived and rushed “dominance”: flooding the electric vehicle market with intentionally very low quality.

Tesla specifically is to blame for a shockingly high death toll related directly to its known design failures (e.g. multiple cases of occupants unable to escape during intense fire, burning them alive).

Other car companies run far lower fire risks because, when they do have a flaw, it gets investigated and recalled far more proactively.

Let’s talk about Tesla’s fire-to-root-cause ratio, for example. Who keeps that tally public? How many incidents sit open with empty answers?

We see hundreds and hundreds of Tesla complaints, including tragic mentions of death by fire, while competing brands have zero complaints.

Consider also how combustion engine fires very often are rooted in electrical systems (e.g. a loose wiring harness). Getting to root cause means true “electrical fire” recall counts are far higher than Tesla ever admits when it tries to confuse analysts by calling them combustion.

Examples in 2020, just for sake of illustration?

  • Electrical short fire recall: over 400,000 Hyundai Elantra
  • Electrical short fire recall: over 300,000 Kia Cadenza & Sportage
  • Electrical short fire recall: over 200,000 Honda Odyssey
  • Touch-screen fire recall: over 200,000 Tesla Model X & S (and 2021 because they missed some)

Electrical fires.

All of them, electrical. Tesla knows this yet cruelly twists reporting to falsely make it seem like electrical fires will somehow magically become a low risk, after decades of data proving the opposite.

Or, let me put it another way.

Gasoline cars have batteries and wires. The batteries and wires cause fires at a very high rate, second only to fuel leaks.

So if you remove only gasoline, you’re still going to be looking into a LOT OF FIRES from… electricity.

Combustion engine fire recalls for electrical systems should be reported as such, so that today’s electrical fires would make sense in the context of a problem that has always existed.

Yeah, electrical systems have caused a lot of fires and millions of recalled vehicles. Soooo, Tesla should have read those tea leaves and figured their cars would have a problem, a big problem with fires.

How many gasoline fires happen at a station during refueling, and how many happen while just driving up an on-ramp? Here again, Tesla seems particularly unique in regard to risk, because drivers with no warning, lulled by a false sense of safety from marketed overconfidence, are in sudden grave danger.

If a combustion engine tank is ruptured in a crash, and there are high rates of crashes, then that isn’t really comparable to Tesla electric cars repeatedly bursting into flames without any “known” reason other than… electrical risks everyone else is actually talking about and fixing.

…41 crashes vs 20,315 crashes vs 543 crashes make it statistically irresponsible to compare these numbers. For example, if there was a 42nd crash with an EV and it caught on fire then it would be 4.76% of EVs or double the [worst] rate…
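That quote’s arithmetic is worth working through, because it shows why tiny samples swing wildly. A minimal sketch in Python, assuming (per the quote’s math) that 1 of the 41 EV crashes involved fire, and using a made-up fire count for the 20,315-crash sample:

```python
def fire_rate_pct(fires: int, crashes: int) -> float:
    """Percent of crashes that involved fire."""
    return 100.0 * fires / crashes

# Tiny sample: one more incident doubles the rate.
print(f"{fire_rate_pct(1, 41):.2f}%")   # 2.44%
print(f"{fire_rate_pct(2, 42):.2f}%")   # 4.76%

# Large sample (the 200 fires here are a placeholder count):
# one more incident barely registers.
print(f"{fire_rate_pct(200, 20315):.2f}%")  # 0.98%
print(f"{fire_rate_pct(201, 20316):.2f}%")  # 0.99%
```

One incident doubles the small-sample rate while the large sample barely moves; comparing the two as if equivalent is, as the quote says, statistically irresponsible.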

Here’s a great hypothesis to examine: removing petroleum fuels will eliminate the current highest area of fire risk, yet the total fire risk may increase as a result of unsafe electrical systems rushed to market by Tesla.

For those still curious about fuel leak cases, manufacturers point to simple sensor indicators that can give drivers advance warning of fire risk.

Advance warning of Tesla fire? That would require Tesla admitting their problems, admitting regulators save lives by forcing ground truth.

Further proof that Tesla is worsening the market ahead comes from the fact that after its many fires for “unexplained reasons”, multiple more fires start over the following days, allegedly with no ability to predict those either.

Should we count each of these Tesla fires individually and adequately?

It seems so. I may ask tesla-fire.com to add a column so we can have a proper multiplier.

There have been nearly 200 fires, yet since Tesla fires are known to restart again and again without proper explanation, the count arguably should be more like 500 or higher.

As a best guess, new short-circuits happen every time a wrecked Tesla battery is shifted, so another electrical system fire starts… but who counts all of these as unique and different? When does Tesla admit it knows the root cause and take proper action to save lives and reduce the taxpayer/societal burden of its cheap designs and weak engineering?

Dealerships, repair shops, tow-trucks and junkyards have been reporting explosive uncontrollable Tesla fires unlike anything seen in the modern world of highly regulated gasoline safety.

The point is that anyone launching any modern electric car should have treated fire risk as their top engineering priority, and now should recognize immediately how things are worsening (e.g. spontaneous combustion without explanation), instead of repeatedly claiming surprise and ignorance.

Every Chevy Bolt was recalled when it showed even slight risk of fire. I’d thus easily recommend a Bolt over Tesla; the safety/fatality data when comparing the two makes it clear why.

Any electric car with evidence of ignorant management should be grounded until fire risk is independently studied and verified as eliminated. What does “eliminated” mean?

There should be no excuses for ignoring the problem, no tolerance of basic safety negligence. No statements of whataboutism. This is not new or different from any car. GM and others have done the right thing on multiple levels, including training public fire crews, with their new electric vehicles. Tesla never seems to care at all, as illustrated by their unique fire death tolls.

At this point we should ask why any father would take such highly unnecessary risks and put children in an inexplicably flammable Tesla.

New Porsche Dash Buttons Are Like Nails in Tesla’s Coffin

Tesla designs are the gaudy jewelry and tacky gemstones of the car industry; we shouldn’t even call them engineering.

The Tesla dashboard seems artificial and superficial by design. A big piece of glass glued to the dash is cheap and lazy. The touchscreen certainly is not meant for anyone who thinks deeply about anything, let alone anyone who understands driving.

Take the counterexample: the luxury brand Cadillac recently announced buttons to wow critics.

Now Porsche is doing the same, after Audi.

An almost entirely touchscreen user interface was probably Porsche’s only misstep with the otherwise-excellent Taycan. Indeed, when Audi used that car’s platform to make its own electric express, the e-tron GT, it was notable that the climate control touchscreen was gone, replaced with actual buttons…

To be fair, industry innovation leaders (and best EV makers) Nissan and Hyundai never stopped offering buttons.

The explanation is simple. Buttons provide control without sacrificing safety, reliably delivering high value with minimal cost to users.

Touchscreens are the opposite: they undermine control, prove unreliable, and often come at high cost to users. Tesla Semis, all brand new and meant to be a showcase of the company’s best ideas, allegedly have been pulled off the roads because their screens catastrophically fail so often.

Before that, and you’d think they could have learned something from it, Tesla had to recall over a hundred thousand vehicles due to failing screens.

Because Tesla decided to put the defroster/defogger button and other climate controls on the screen, rather than using physical buttons, a screen failure falls into the safety domain.

Their screen always falls into the safety domain, for the same reason people should never text while driving.

Navigating through various levels of menus to reach a desired control can be particularly dangerous; one study by the AAA Foundation concluded that infotainment touch screens can distract a driver for up to 40 seconds, long enough to cover half a mile at 50 mph.
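That half-mile figure checks out with simple arithmetic:

```python
# Distance covered during a 40-second distraction at 50 mph,
# checking the AAA figure quoted above.
speed_mph = 50
distraction_seconds = 40

miles = speed_mph * (distraction_seconds / 3600)  # seconds -> hours
print(f"{miles:.2f} miles")  # 0.56 -- over half a mile, effectively blind
```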

And on that note, as the VP of legal at Tesla admitted in that recall, their screens fail within five years.

That’s fashion, NOT function.

Intentional failures leading to rapid unplanned obsolescence should also be known as a terrible investment.

Heads up (pun intended): buttons are better for everyone. Luxury brands are meant to sell what’s good by long measures, not poorly made shiny things that rapidly devalue and force wasteful/unsafe consumption.

We’re talking very basic logic here, the kind that suggests nobody should ever buy another Tesla… or a diamond (piece of glass glued to a ring).

…we got tricked for about a century into coveting sparkling pieces of carbon, but it’s time to end the nonsense.

Financial analysts point to Tesla’s declining earnings, declining revenue and declining free cash flow as seriously problematic. Safety experts point to rising fatalities, failing designs, worsening quality over time and an inability to learn.

Underneath them all is the simple fact that the CEO of Tesla runs it like a South African blood diamond mine: an oversupply of cheap assets made by exploited labor and marketed through mindless “caste” propaganda about “simplifying systems of control”… to sustain obvious gross overvaluation.

…until the 18th Century, the formal distinctions of caste were of limited importance to Indians, social identities were much more flexible and people could move easily from one caste to another. New research shows that hard boundaries were set by British colonial rulers who made caste India’s defining social feature when they used censuses to simplify the system, primarily to create a single society with a common law that could be easily governed.

Think of Tesla as low grade fashion tokens falsely promising high caste status, instead making people more easily governed (less free, at high risk of sudden death).

Buttons thus are symptomatic of sensibilities returning to a market, providing engineered longevity with an emphasis on safety and utility. The well-designed button represents freedom — quality of life in a car — as an important moral proof.

Don’t buy a car without them.

Why Open-Source AI is Faster, Safer and More Intelligent than Google or OpenAI

A “moat” historically meant a physical method to reduce threats to a group intended to fit inside it. Take for example the large Buhen fortress on the banks of the Nile. Built by Pharaoh Senwosret III around 1860 BCE, it boasted a high-tech ten-meter-high wall next to a three-meter-deep moat to protect his markets against Nubians brave enough to fight against occupation and exploitation.

Hieroglyphics roughly translated: “Just so you know, past this point sits the guy who killed your men, enslaved your women and children, burnt your crops and poisoned your wells. Still coming?”

Egyptian Boundary Stele of Senwosret III, ca. 1878-1840 B.C., Middle Kingdom. Quartzite; H. 160 cm; W. 96 cm. On loan to Metropolitan Museum of Art, New York (MK.005). http://www.metmuseum.org/Collections/search-the-collections/591230

Complicated, I suppose, since being safe inside such a moat meant protection against threats, yet being outside was defined as being a threat.

Go inside and lose freedom, go outside and lose even more? Sounds like Facebook’s profit model can be traced back to stone tablets.

Anyway, in true Silicon Valley fashion of ignoring complex human science, technology companies have been expecting to survive an inherent inability to scale by building primitive “moats” to prevent groups inside from escaping to more freedom.

Basically moats used to be defined as physically protecting markets from raids, and lately have been redefined as protecting online raiders from markets. “Digital moats” are framed for investors as a means to undermine market safety — profit from users enticed inside who then are denied any real option to exit outside.

Unregulated, highly-centralized, proprietary technology brands have modeled themselves as a rise of unrepresentative digital Pharaohs shamelessly attempting new forms of indentured servitude, despite everything in history saying BAD IDEA, VERY BAD.

Now for some breaking news:

Google has been exposed by an alleged internal panic memo about the profitability of future servitude, admitting “We Have No Moat, And Neither Does OpenAI.”

While we’ve been squabbling, a third faction has been quietly eating our lunch. I’m talking, of course, about open source. Plainly put, they are lapping us. Things we consider “major open problems” are solved and in people’s hands today. […] Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months.

One week! Stunning pace of improvement. https://lmsys.org/blog/2023-03-30-vicuna/

It’s absolutely clear this worry and fret from Google insiders comes down to several key issues. The following paragraph in particular caught my attention since it feels like I’ve been harping about this for at least a decade already:

Data quality scales better than data size
Many of these projects are saving time by training on small, highly curated datasets. This suggests there is some flexibility in data scaling laws. The existence of such datasets follows from the line of thinking in Data Doesn’t Do What You Think, and they are rapidly becoming the standard way to do training outside Google.

There has to be common sense about this. Anyone who thinks about thinking (let alone writing code) knows a minor change is more sustainable at scale than a complete restart. The final analysis is that learning improvements grow bigger, faster and better through fine-tuning/stacking on low-cost consumer machines instead of completely rebuilding on giant industrial engines after each change.

…the model can be cheaply kept up to date, without ever having to pay the cost of a full run.
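To make “cheaply kept up to date” concrete: the open-source trick the memo describes is adapter-style fine-tuning, where only a small low-rank update is trained on top of frozen base weights. A minimal sketch, assuming the Hugging Face transformers and peft libraries (the model name and hyperparameters are illustrative):

```python
# A minimal LoRA fine-tuning sketch, assuming the Hugging Face
# transformers and peft libraries. Model name and hyperparameters
# are illustrative only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_13b")

config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections only
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of base weights

# Train only the adapter on a small, curated dataset; stack or swap
# adapters later -- no full retraining run required.
```

That tiny trainable fraction is why a hobbyist’s machine can keep a model current while a full retraining run stays a datacenter luxury.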

You can scale a market of ideas better through a system designed for distributed linked knowledge with safety mechanisms, rather than planning to build a new castle wall every time a stall is changed or new one opened.

Building centralized “data lakes” was a hot profit ticket in 2012 that blew up spectacularly just a few years later. I don’t think people realized social science theory like the “fog of war” had told them not to do it, but they definitely should have walked away from “largest” thinking right then.

Instead?

OpenAI was born in 2015, in the sunset phase of a wrong model mindset. Fun fact: I once was approached and asked to be CISO for OpenAI. Guess why I immediately refused and instead went to work on massively distributed high-integrity models of data for AI (e.g. W3C Solid)?

…maintaining some of the largest models on the planet actually puts us at a disadvantage.

Yup. Basically the confidentiality failures that California breach law SB1386 hinted at way back in 2003, let alone more recent attempts to stop integrity failures.

Tech giants have vowed many times to combat propaganda around elections, fake news about the COVID-19 vaccines, pornography and child exploitation, and hateful messaging targeting ethnic groups. But they have been unsuccessful, research and news events show.

Bad Pharaohs.

Can’t trust them, as the philosopher David Hume sagely warned in the 1700s.

To me the Google memo reads as if pulled out of a dusty folder: an old IBM fret that open communities running on Sun Microsystems (get it? a MICRO system) using wide-area networks to keep knowledge cheaply up to date… will be a problem for mainframe profitability that depends on monopoly-like exit barriers.

Exiting times, in other words, to be working with open source and standards to set people free. Oops, meant to say exciting times.

Same as it’s ever been.

There is often an assumption that operations should be large and centralized in order to be scalable, even though such thinking is provably backwards.

I suspect many try to justify such centralization due to cognitive bias, not to mention the benefit of hedging value away from a community and into just a small number of hands.

People soothe fears through promotion of competition-driven reductions; simple “quick win” models (primarily helping themselves) get hitched to a stated need for defense, without transparency. They don’t expend effort on wiser, longer-term yet sustainable efforts toward more interoperable, diverse and complex models that could account for wider benefits.

The latter models actually scale, while the former models give an impression of scale until they can’t.

What the former models do when struggling to scale is something perhaps right out of ancient history. Like what happens when a market outgrows the slowly-built stone walls of a “protective” monarchist’s control.

Pharaohs are history for a reason.

The AI Trust Problem Isn’t Fakes. The AI Trust Problem Is Fakes.

See what I did there? It worries me that too many people are forgetting almost nobody really has been able to tell what is true since… forever. I gave a great example of this in 2020: Abraham Lincoln.

A print of abolitionist U.S. President Abraham Lincoln was in fact a composite, a fake. Thomas Hicks had placed Lincoln’s unmistakable head on the distinguishable body of Andrew Jackson’s rabidly pro-slavery Vice President John Calhoun. A very intentionally political act.

The fakery went quietly along until Stefan Lorant, art director for London’s Picture Post magazine, noticed a very obvious key to unlock Hicks’ puzzle — Lincoln’s mole was on the wrong side of his face.

Here’s a story about Gary Marcus, a renowned AI expert, basically ignoring all the context in the world:

His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”

Will be flooded?

That is literally what the Internet has done since its origin. The printing press flooded the average person. The radio flooded the average person. It’s not like the Internet, in true reinvention of the wheel fashion, grew in popularity because it came with an inherent truth filter.

The opposite: bad packets were always there for bad actors, and despite a huge amount of money invested for decades into “defenses”, many bad packets continue to flow.

Markets always are dangerous, deceptive places if left without systems of trust formed with morality (as philosopher David Hume explained rather clearly in the 1700s, perhaps too clearly given the church then chastised him for being a thinker/non-believer).

“Magritte’s original title of this painting was L’usage de la parole I (The Use of Speech I), which implies that we should question the veracity of the words, not the image.” Source: BBC

So where does our confidence and ability to move forward stem from? Starting a garden (pun not intended) requires a plan to assess and address, curate if you will, risky growth. We put speed limiters into use to ensure pivots, such as making a turn or changing lanes, won’t be terminal.

Historians, let alone philosophers and anthropologists, might even argue that having troubles with the truth has been the human condition across all communication methods for centuries if not longer. Does my “professor of the year” blog post from 2008 or the social construction of reality ring any bells?

Americans really should know exactly what to do, given America has such a long history of regulating speech with lots of censorship; from the awful gag rule to preserve slavery, to banning Black Americans from viewing Birth of a Nation, all the way to the cleverly named WWII Office of Censorship.

What’s that? You’ve never heard of the U.S. Office of Censorship, or read its important report from 1945 saying Americans are far better off because of their work?

This is me unsurprised. Let me put it another way. Study history when you want to curate a better future, especially if you want to regulate AI.

Not only study history to understand the true source of troubles brewing now, growing worse by the day… but also to learn where and how to find truth in an ocean of lies generated by flagrant charlatans (e.g. Tesla wouldn’t exist without fraud, as I presented in 2016).

If more people studied history for more time we could worry less about the general public having skills in finding truth. Elon Musk probably would be in jail. Sadly the number of people getting history degrees has been in decline, while the number of people killed by a Tesla skyrockets. Already 19 dead from Elon Musk spreading flagrant untruths about AI. See the problem?

The average person doesn’t know what is true, but they know whom they trust; the resulting power distribution is known to them almost instinctively. They follow some path of ascertaining truth through family, groups, associations, “celebrity” etc. that provides them a sense of safety even when truth is absent. And few (in America especially) are encouraged to steep themselves in the kinds of thinking that break away from easy, routine and minimal-judgment contexts.

Just one example of historians at work is a new book about finding truth in the average person’s sea of lies, called Myth America. It was sent to me by very prominent historians talking about how little everyone really knows right now, exposing truths against some very popular American falsehoods.

This book is great.

Yet who will have the time and mindset to read it calmly and ponder the future deeply when they’re just trying to earn enough to feed their kids and cover rising medical bills to stay out of debtors’ prison?

Also books are old technology so they are read with heaps of skepticism. People start by wondering whether to trust the authors, the editors and so forth. AI, as with any disruptive technology in history, throws that askew and strains power dynamics (why do you think printing presses were burned by 1830s American cancel culture?).

People carry bias into their uncertainty, which predictably disarms certain forms of caution/resistance during a disruption caused by new technology. They want to believe in something, swimming towards a newly fabricated reality and grasping onto things that appear to float.

It is similar to Advance Fee Fraud working so well with email these days instead of paper letters. An attacker falsely promises great rewards later, a pitch about safety, if the target reader is willing (greedy) enough to believe some immediate lies coming through their computer screen.

Thus the danger is not just in falsehoods, which surround us all the time our whole lives, but in how old falsehoods get replaced with new falsehoods through a disruptive new medium of delivery: the fear comes during rapid changes to the falsehoods believed.

What do you mean boys can’t wear pink, given it was a military tradition for decades? Who says girls aren’t good at computers, when they literally invented programming and led the hardware and software teams where quality mattered most (e.g. Bletchley Park was over 60% women)?

This is best understood as a power shift process that invites radical, even abrupt, breaks depending on who tries to gain control over society, who can amass power, and how!

Knowledge is powerful stuff; creation and curation of what people “know” is thus often political. How dare you prove the world is not flat, undermining the authority of those who told people untruths?

AI can very rapidly rotate on falsehoods like the world being flat, replacing known and stable ones with some new and very unstable, dangerous untruths. Much of this is like the stuff we should all study from way back in the late 1700s.

It’s exactly the kind of automated information explosion the world experienced during industrialization, which eventually led people into world wars. Here’s a falsehood that a lot of people believed, as an example: fascism.

Old falsehoods during industrialization fell away (e.g. a strong man is a worker who doesn’t need breaks and healthcare) and were replaced with new falsehoods (e.g. a strong machine is a worker that doesn’t need strict quality control, careful human oversight and very narrow purpose).

The uncertainty of sudden changes in who or what to believe next (power) very clearly scares people, especially in environments unprepared to handle surges of discontent when falsehoods or even truths rotate.

Inability to address political discontent (whether based in things false or true) is why the French experienced a violent disruptive revolution yet Germany and England did not.

That’s why the more fundamental problem is how Americans can immediately develop methods for reaching a middle ground as a favored political position on technology, instead of only a left and right (divisive terms from the explosive French Revolution).

New falsehoods need new leadership through a deliberative and thoughtful process of change, handling the ever-present falsehoods people depend upon for a feeling of safety.

Without the U.S. political climate forming a strong alliance, something that can hold a middle ground, AI can easily accelerate polarization that historically presages a slide into hot war to handle transitions — political contests won by other means.

Right, Shakespeare?

The poet describes a relationship built on mutual deception that deceives neither party: the mistress claims constancy and the poet claims youth.

When my love swears that she is made of truth
I do believe her though I know she lies,
That she might think me some untutored youth,
Unlearnèd in the world’s false subtleties.
Thus vainly thinking that she thinks me young,
Although she knows my days are past the best,
Simply I credit her false-speaking tongue;
On both sides thus is simple truth suppressed.
But wherefore says she not she is unjust?
And wherefore say not I that I am old?
O, love’s best habit is in seeming trust,
And age in love loves not to have years told.
Therefore I lie with her and she with me,
And in our faults by lies we flattered be.