
Brand New Tesla Owner Testing Autopilot Killed by F-150

A Tesla has been in a crash with an F-150, leaving the Tesla owner dead. The crash quietly revealed, yet again, that the “Autopilot crap” really is, and always will be, crap.

One of the shocking things in April 2016 was that a man who clearly knew nothing about cars positioned a deceptively named non-working “Autopilot” as already achieving collision avoidance.

Source: Twitter

An entirely avoidable near-miss incident video shared by Josh Brown was unapologetically spun into a dangerously misleading Twitter PR campaign (disinformation that contributed to Brown’s own death only weeks later, as he trusted the CEO’s suggestion that collisions would be avoided).

As you probably know by now, his unnecessary death (and the Tesla CEO’s response that radar would be added to cars to prevent any more deaths, a statement he then reversed to increase margins on sales) sparked a series of talks by me over the subsequent years about the fraud (and gaslighting).

I hoped to raise awareness from a safety engineering perspective. Tesla seemed to be run by someone detached from reality and uncaring about human life, a combination that would kill more people than ever.

While my warnings about this have been accurate, I’m still not sure the right people are getting the blazingly loud tesladeaths.com message yet.

Source: Tesladeaths.com

I suppose it was easy for me to predict Tesla’s early “safest car” claims as fraud. That is because I look deeply into the data, look deeply into the technology, and… let’s be honest, this isn’t rocket science.

It’s actually pretty easy stuff for anyone to see. Just take a little time to read, know how to drive a car, and don’t believe hype from a man who gets enraged whenever his name isn’t mentioned in news about Tesla.

Here’s the latest problem, after it quietly leaked out of Corona police reports. The initial safety bulletin was sparsely written:

On March 28, 2023, at 10:16 P.M., the Corona Police Department responded to a traffic collision at the intersection of Foothill Parkway and Rimpau Avenue. Upon arrival, officers discovered that a Ford F-150 truck had entered the intersection against a red light and collided with a Tesla. The driver of the Tesla, a 43-year-old male resident of Corona, succumbed to his injuries at the scene…

Red light crash? That is one of my core areas of control failure research that I have spoken out about for years — like talking about firewall rule bypass, which should make any SOC analyst’s ears burn.

To me the report reads like a Tesla in 2023 did not avoid a collision, despite such promised capability. That is a very big clue for predicting more crashes, worth investigating further (e.g. the entire basis of my presentations/predictions for the past seven years).

After months of watching and waiting on Corona, California, for more details, nothing came (unlike in a Colorado investigation of killer robots, for comparison). We haven’t heard from police, for example, whether the CEO behind Tesla’s fraud campaign is in any trouble for his role in this crash. Could his buggy, oversold software have done a better job here than the human… at the thing he repeatedly promises: collision avoidance?

Fortunately the NHTSA now has a Standing General Order (SGO) on Crash Reporting that researchers can easily correlate with these dry police reports. More to the point, the SGO helps us assess future dangers of various driver “assistance” entities.

Entities named in the General Order must report a crash if Level 2 ADAS was in use at any time within 30 seconds of the crash and the crash involves a vulnerable road user or results in a fatality, a vehicle tow-away, an air bag deployment, or any individual being transported to a hospital for medical treatment.

That 30-second window exists because Tesla tried to cheat by disabling its technology at the very last second so it could falsely blame humans for computer failures.
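For anyone who wants to check a specific report against that rule, here is a minimal sketch of the reportability logic in plain code. The field names and the dataclass are my own illustrative assumptions, not the official NHTSA schema.

```python
# Minimal sketch of the SGO reportability rule described above.
# Field names are illustrative assumptions, not the NHTSA schema.
from dataclasses import dataclass

@dataclass
class Crash:
    adas_active_within_30s: bool   # Level 2 ADAS in use within 30 s of impact
    fatality: bool
    tow_away: bool
    airbag_deployed: bool
    hospital_transport: bool
    vulnerable_road_user: bool

def sgo_reportable(c: Crash) -> bool:
    severity = (c.fatality or c.tow_away or c.airbag_deployed
                or c.hospital_transport or c.vulnerable_road_user)
    return c.adas_active_within_30s and severity

# The Corona crash: ADAS engaged and a fatality, so it is reportable even
# if the system disengaged in the final second before impact.
print(sgo_reportable(Crash(True, True, True, True, False, False)))  # True
```

The point of the 30-second window is exactly that last comment: a last-second disengagement does not move a crash outside the reporting requirement.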

Tesla seems to blame drivers even though, in at least sixteen high-profile emergency vehicle crashes—in which Teslas rammed into stopped emergency vehicles alongside roadways or in active lanes, incidents NHTSA found a human on average could have identified up to 8 seconds ahead of time—Autopilot was running but then shut off just a second before the impact.

Super evil stuff.

It’s hard for me to believe people still buy this brand after its founder was pushed out for being concerned about safety. Let me say that again: Tesla management appears to be intentionally cooking data to avoid accountability when their product failed.

You cannot trust anything Tesla says about Tesla logs. Assume they are the Enron of automobile data.

What they do is far, far worse than VW’s “dieselgate” since people are literally dying from their coverups.

At this point NHTSA might as well issue a Tesla-specific Order, given how the company sticks out like a sore thumb among all the other carmakers.

Source: NHTSA SGO May 2023

Is it volume? Nope.

To put it another way: Nissan claims nearly 600,000 vehicles operating its Level 2 ADAS “ProPilot Assist”, yet it appears to have reported only two crashes. Volvo similarly has a high volume of cars on the road, which shows up as just one crash.

Tesla has design flaws causing a crash frequency that puts it in a zone all its own, at over 700 reported crashes and rapidly increasing. The more people use its products the more people die or are harmed, unlike with any other carmaker.

To call it garbage engineering rushed to market puts it mildly, made worse by a CEO who fraudulently promotes low-quality software as if it’s magic fairy dust. His recent decision to personally screen every single Tesla employee for loyalty to him (e.g. the infamous moral dilemma of a Hitler Oath, for those who know history) indicates he is becoming desperate to hide failure.

[Creating a fear and harm culture where staff] excuse themselves from any personal responsibility for the unspeakable crimes which they carried out on the orders of the Supreme Commander whose true nature they had seen for themselves. […] Later and often, by honoring their oath they dishonored themselves as human beings and trod in the mud the moral code…

It is a grave mistake (pun not intended) to take any job at Tesla, as German autoworkers surely know well. Their country, unlike America, requires them to study the anti-Semites who caused the Holocaust and how they took over media to spread hate.

American autoworkers and their children in 1941 protest Ford’s relationship with Hitler. Source: Wayne State

Anyway, back to this tragic Corona crash reported to NHTSA in April: a Tesla owner clearly didn’t get the memo about Tesla safety failures and disinformation as his car drove towards the wide-open Rimpau and Foothill intersection.

Source: Google StreetView

So here are the SGO details just released, which show the destroyed Tesla was brand new and traveling at only 47 mph… essentially a brand-new car on the road at 10 PM (Dark – Lighted), with ADAS engaged.

  • Model Year: 2023
  • Mileage: 590
  • Date: March 2023
  • Time: 05:15 (10:15 PM local time)
  • Posted Speed Limit: 45
  • Precrash Speed: 47

That 590-mile odometer really grabbed me. It sounds like very compelling proof that Tesla safety claims, like their customers, are dead. And the speed was below 50 mph, which in a wide-open intersection should have meant a shorter stopping distance, more time to react, and thus less risk of harm.
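To put rough numbers on why a moderate speed matters so much, here is a back-of-the-envelope stopping-distance calculation. The 1.5-second reaction time and 0.7 g of braking are round-number assumptions for illustration, not a reconstruction of this crash.

```python
# Illustrative stopping-distance arithmetic, not crash reconstruction.
# Assumes 1.5 s perception-reaction time and 0.7 g braking deceleration.
def stopping_distance_m(speed_mph, reaction_s=1.5, decel_g=0.7):
    v = speed_mph * 0.44704                  # mph -> m/s
    reaction = v * reaction_s                # distance covered before braking
    braking = v ** 2 / (2 * decel_g * 9.81)  # distance covered while braking
    return reaction + braking

for mph in (47, 97):
    print(f"{mph} mph -> ~{stopping_distance_m(mph):.0f} m to stop")
# 47 mph -> ~64 m, versus ~202 m at 97 mph; a moderate speed in a wide-open
# intersection should leave a collision-avoidance system ample margin.
```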

It’s worth further research to understand why the newest Tesla ADAS at moderate speed, the most recent and theoretically improved product, again didn’t avoid a fatal crash. Given it has been advertised as collision avoidance that supposedly improves over time, the advertising isn’t even close to product ability.

The data even suggests an older Tesla could be less likely to fatally crash than any new one (e.g. Mobileye and NVIDIA supplied Tesla superior engineering, yet both walked away over serious ethics concerns).

Tesla technology and management of risk to society seems to be only getting worse, crashing in ways that weren’t a problem five or more years ago.

We see far too many brand new Teslas crash almost immediately like this, for example, going straight from factory to junkyard as if they are destined to be the least safe thing on the road. The CEO talks about his product like it will improve, yet it declines; real-world experience suggests he is selling little more than overpriced coffins.

On a related note, the top five recent ADAS fatalities by speed (MPH) were all… Teslas flagrantly ignoring road signs and failing to avoid collisions.

  1. Posted Speed: 55; Precrash: 97 into another car
  2. Posted Speed: 45; Precrash: 97 into motorcycle
  3. Posted Speed: 55; Precrash: 96 into fixed object
  4. Posted Speed: 65; Precrash: 96 into fixed object
  5. Posted Speed: 55; Precrash: 90 into another car
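Anyone can reproduce this kind of ranking from the public SGO spreadsheet in a few lines of analysis. The file name and column names below are assumptions about the CSV layout (check the actual download from NHTSA before relying on them), so treat this as a sketch rather than a recipe.

```python
# Sketch: count ADAS crash reports by make, then rank fatal reports by
# precrash speed. File and column names are assumptions about the SGO CSV.
import pandas as pd

sgo = pd.read_csv("SGO_ADAS_incident_reports.csv")  # hypothetical filename

# Tesla's outlier share of Level 2 ADAS crash reports
print(sgo["Make"].value_counts().head(10))

fatal = sgo[sgo["Highest Injury Severity"] == "Fatality"].copy()
fatal["Precrash Speed (MPH)"] = pd.to_numeric(
    fatal["Precrash Speed (MPH)"], errors="coerce"
)
top5 = fatal.sort_values("Precrash Speed (MPH)", ascending=False).head(5)
print(top5[["Make", "Posted Speed Limit (MPH)", "Precrash Speed (MPH)"]])
```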

How “auto” can any “pilot” ever be, given it has clocked over 700 catastrophic errors such as these?

Would any actual pilot, human or not, be allowed to pilot for even one more minute given fatal crashes into things at this frequent pace? The fact that software can be used to kill and then kill again, and again hundreds of times, is truly diabolical.

And it is a strange twist to history, albeit a repeat, that a Ford just destroyed what arguably is a fascist machine.

Is it a moral act to destroy or disable Tesla before its unsafe ADAS will predictably crash at high speed into innocent people?

Did the F-150 save lives by taking this Tesla off the road? Who in your neighborhood is equipped and prepared to stop what could be seen as a VBIED edition of the poorly-engineered Nazi V1? Floridians, looking at you.

Florida Passes Bill to Protect Billionaires if Their Exploding Rockets Kill People

That’s a “lives don’t matter” bill, rooted in pre-Civil War thinking.

After years of Tesla deaths rapidly increasing, the real elephant in the room is… what if Tesla in fact intends to automate organized racist violent criminal acts like when Henry Ford had encouraged and helped Adolf Hitler?

What if Tesla conspires with large foreign financial backers to facilitate automated deaths for profit every minute it is “free” to apocalyptically continue “learning”; a model setup to get away with more harms to Americans?

Perhaps driving an F-150 has just taken on a whole new meaning.

Tesla eXemplifies Billionaire Misconduct (XBM)

IBM undeniably ran the German Nazi death camps. Their instrumental role was cemented, literally, in places like Dachau.

I’ve been to plantations. I’ve been inside of execution chambers. I’ve walked the halls of death row. I’ve been to a lot of places where death and violence are, and have been, enacted on people. But I’ve never experienced the chill in my body and in my spirit that I did when I was walking through the gas chamber at Dachau. I was startled by how deeply I felt it in my body, how deeply unsettled I felt in my spirit. And then you realize how recent it was. This was less than 80 years ago.

IBM’s man Watson was responsible.

Do you know what Watson stood for?

You’d think as a result of his role that the name would be sullied and untouchable, even if IBM successfully spun a “technology is neutral” narrative to avoid accepting guilt for directly facilitating and expanding genocide.

Ford undeniably inspired the German Nazi industrialization of genocide, which led to demand for IBM’s help.

Ford’s man Ford was responsible.

American autoworkers and their children in 1941 protest Ford’s long and close relationship with Hitler. Source: Wayne State

Again you’d think as a result of his role that the name would be sullied and untouchable.

Ford literally was cited by Hitler many times and in many ways as his inspiration for race-based state-led violence.

In both cases — Watson and Ford — we should ask whether the misconduct of such wealthy men ever led to any real justice for their victims.

Let’s just say… IBM has gleefully and cruelly promoted their “artificial intelligence” (AI) product billed to save the world as:

Watson.

IBM announced WatsonX, an all-in-one artificial intelligence building tool for enterprises.

The X stands for amorphous, unaccountable, irresponsible.

Why not name it Hitler and get straight to the point? I jest, of course. But not really.

A recent attempt to use the dangerously-named Watson AI in a hospital setting had to be unplugged when it tried to kill (simulated) patients. Apparently these overseeing doctors weren’t profiting from death, a devastating blow to IBM’s historic sales model.

A good doctor sees the patient, not the symptoms. Watson saw the symptoms of inefficiency and lack of capability. It did not see the process of care and making whole, where doctors, not data, were what needed to be understood.

Watson saw symptoms of inefficiency… should be words engraved into the memorials at Nazi death camps that ran on IBM.

Wait, what?

Are you surprised that even the latest and greatest IBM machines that “learn” something, their best attempts at “intelligence”, were actually trying to kill the (simulated) patients that they promised to help?

… suggested a cancer patient with severe bleeding be given a drug that could cause the bleeding to worsen …

That drive to do the wrong thing was quite a big part of how Watson (let alone Hitler) became so suddenly wealthy, right?

Machines “know” this kind of detail in history, without understanding it. Even when humans say they don’t know (ignorant of their own history), machines easily parse the fact that Watson worked for Hitler on genocide as an “efficiency” problem.

Perhaps now you see why I write about technology history. This stuff helps predict future failures. Like the 1960s tragedy of Operation IGLOO WHITE (a billion/year U.S. foreign and domestic drone surveillance project that never worked).

Gaining a foundation of knowledge on what’s behind some American billionaire success (even just these two men out of hundreds or thousands) should give serious pause. Who thinks Tesla ever will do anything about fixing or changing its problems that have been rapidly killing so many people, related directly to billionaire profits?

Source: Tesladeaths.com

Will the latest obnoxious American billionaire, known for spreading toxic lies like it’s 1933 again, ever be held to account? Allegedly he is using Twitter right now to argue a swastika tattoo doesn’t prove someone is a Nazi. What a guy.

Watson, Ford and of course Stanford (sorry, couldn’t leave one of the worst men in history out) likely would say no, accountability never came for them.

I’ll bet that most Americans would say they’ve never heard about such billionaire misconduct before, even though it is widespread and core to their political history… a “Sage” lesson, if you will.

Margaret Olivia Sage

Show me an AI project named after Sage and I’ll show you someone who isn’t ignorant of the risks ahead.

Why Open-Source AI is Faster, Safer and More Intelligent than Google or OpenAI

A “moat” historically meant a physical method to reduce threats to a group intended to fit inside it. Take for example the large Buhen fortress on the banks of the Nile. Built by Pharaoh Senwosret III around 1860 BCE, it boasted a high-tech ten-meter-high wall next to a three-meter-deep moat to protect his markets against Nubians who were brave enough to fight against occupation and exploitation.

Hieroglyphics roughly translated: “Just so you know, past this point sits the guy who killed your men, enslaved your women and children, burnt your crops and poisoned your wells. Still coming?”

Egyptian Boundary Stele of Senwosret III, ca. 1878-1840 B.C., Middle Kingdom. Quartzite; H. 160 cm; W. 96 cm. On loan to Metropolitan Museum of Art, New York (MK.005). http://www.metmuseum.org/Collections/search-the-collections/591230

Complicated, I suppose, since being safe inside such a moat meant protection against threats, yet being outside was defined as being a threat.

Go inside and lose freedom, go outside and lose even more? Sounds like Facebook’s profit model can be traced back to stone tablets.

Anyway, in true Silicon Valley fashion of ignoring complex human science, technology companies have been expecting to survive an inherent inability to scale by relying on building primitive “moats” to prevent groups inside from escaping to more freedom.

Basically moats used to be defined as physically protecting markets from raids, and lately have been redefined as protecting online raiders from markets. “Digital moats” are framed for investors as a means to undermine market safety — profit from users enticed inside who then are denied any real option to exit outside.

Unregulated, highly-centralized proprietary technology brands have modeled themselves as a rise of unrepresentative digital Pharaohs who are shamelessly attempting new forms of indentured servitude despite everything in history saying BAD IDEA, VERY BAD.

Now for some breaking news:

Google has been exposed by an alleged internal panic memo about profitability of future servitude, admitting “We Have No Moat, And Neither Does OpenAI.”

While we’ve been squabbling, a third faction has been quietly eating our lunch. I’m talking, of course, about open source. Plainly put, they are lapping us. Things we consider “major open problems” are solved and in people’s hands today. […] Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months.

One week! Stunning pace of improvement. https://lmsys.org/blog/2023-03-30-vicuna/

It’s absolutely clear this worry and fret from Google insiders comes down to several key issues. The following paragraph in particular caught my attention since it feels like I’ve been harping about this for at least a decade already:

Data quality scales better than data size
Many of these projects are saving time by training on small, highly curated datasets. This suggests there is some flexibility in data scaling laws. The existence of such datasets follows from the line of thinking in Data Doesn’t Do What You Think, and they are rapidly becoming the standard way to do training outside Google.

There has to be common sense about this. Anyone who thinks about thinking (let alone writing code) knows a minor change is more sustainable for scale than a complete restart. The final analysis is that learning improvements grow bigger, faster and better through fine-tuning/stacking on low-cost consumer machines instead of completely rebuilding upon each change using giant industrial engines.

…the model can be cheaply kept up to date, without ever having to pay the cost of a full run.
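For the curious, here is roughly what that cheap, incremental update looks like in practice today: a low-rank adapter (LoRA) bolted onto an existing open model with the Hugging Face peft library, trained on a small curated dataset instead of rebuilding the whole thing. The model name and hyperparameters are illustrative assumptions, not a recommendation.

```python
# Sketch of incremental fine-tuning with LoRA adapters (Hugging Face peft).
# Model name and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_name = "openlm-research/open_llama_7b"   # any open base model
base = AutoModelForCausalLM.from_pretrained(base_name)
tokenizer = AutoTokenizer.from_pretrained(base_name)

lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],      # adapt attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()            # typically well under 1% of the base

# Train only the adapter weights on a small, highly curated dataset, then
# ship the few-megabyte adapter; the base model never has to be retrained.
```

That tiny adapter file is the whole reason the memo frets: a community on consumer hardware can keep a shared base model current in days, while the giant full-retraining-run model cannot keep up.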

You can scale a market of ideas better through a system designed for distributed linked knowledge with safety mechanisms, rather than planning to build a new castle wall every time a stall is changed or new one opened.

Building centralized “data lakes” was a hot profit ticket in 2012 that blew up spectacularly just a few years later. I don’t think people realized social science theory like “fog of war” had told them not to do it, but they definitely should have walked away from “largest” thinking right then.

Instead?

OpenAI was born in 2015 in the sunset phase of a wrong-model mindset. Fun fact: I once was approached and asked to be CISO for OpenAI. Guess why I immediately refused and instead went to work on massively distributed high-integrity models of data for AI (e.g. W3C Solid)?

…maintaining some of the largest models on the planet actually puts us at a disadvantage.

Yup. Basically confidentiality failures that California breach law SB1386 hinted at way back in 2003, let alone more recent attempts to stop integrity failures.

Tech giants have vowed many times to combat propaganda around elections, fake news about the COVID-19 vaccines, pornography and child exploitation, and hateful messaging targeting ethnic groups. But they have been unsuccessful, research and news events show.

Bad Pharaohs.

Can’t trust them, as the philosopher David Hume sagely warned in the 1700s.

To me the Google memo reads as if pulled out of a dusty folder: an old IBM fret that open communities running on Sun Microsystems (get it? a MICRO system) using wide-area networks to keep knowledge cheaply up to date… will be a problem for mainframe profitability that depends on monopoly-like exit barriers.

Exiting times, in other words, to be working with open source and standards to set people free. Oops, meant to say exciting times.

Same as it’s ever been.

There is often an assumption that operations should be large and centralized in order to be scalable, even though such thinking is provably backwards.

I suspect many try to justify such centralization due to cognitive bias, not to mention hedging benefits away from a community and into just a small number of hands.

People soothe fears through promotion of competition-driven reductions; simple “quick win” models (primarily helping themselves) are hitched to a stated need for defense, without transparency. They don’t expend effort on wiser, longer-term yet sustainable efforts of more interoperable, diverse and complex models that could account for wider benefits.

The latter models actually scale, while the former models give an impression of scale until they can’t.

What the former models do when struggling to scale is something perhaps right out of ancient history. Like what happens when a market outgrows the slowly-built stone walls of a “protective” monarchist’s control.

Pharaohs are history for a reason.

The AI Trust Problem Isn’t Fakes. The AI Trust Problem Is Fakes.

See what I did there? It worries me that too many people are forgetting almost nobody really has been able to tell what is true since… forever. I gave a great example of this in 2020: Abraham Lincoln.

A print of abolitionist U.S. President Abraham Lincoln was in fact a composite, a fake. Thomas Hicks had placed Lincoln’s unmistakable head on the distinguishable body of Andrew Jackson’s rabidly pro-slavery Vice President John Calhoun. A very intentionally political act.

The fakery went quietly along until Stefan Lorant, art director for London Picture Post magazine, noticed a very obvious key to unlock Hicks’s puzzle — Lincoln’s mole was on the wrong side of his face.

Here’s a story about Gary Marcus, a renowned AI expert, basically ignoring all the context in the world:

His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”

Will be flooded?

That is literally what the Internet has done since its origin. The printing press flooded the average person. The radio flooded the average person. It’s not like the Internet, in true reinvention of the wheel fashion, grew in popularity because it came with an inherent truth filter.

The opposite: bad packets were always there for bad actors and — despite a huge amount of money invested for decades into “defenses” — many bad packets continue to flow.

Markets always are dangerous, deceptive places if left without systems of trust formed with morality (as philosopher David Hume explained rather clearly in the 1700s, perhaps too clearly given the church then chastised him for being a thinker/non-believer).

“Magritte’s original title of this painting was L’usage de la parole I (The Use of Speech I), which implies that we should question the veracity of the words, not the image.” Source: BBC

So where does our confidence and ability to move forward stem from? Starting a garden (pun not intended) requires a plan to assess and address, curate if you will, risky growth. We put speed limiters into use to ensure pivots, such as making a turn or changing lanes, won’t be terminal.

Historians, let alone philosophers and anthropologists, might even argue that having troubles with the truth has been the human condition across all communication methods for centuries if not longer. Does my “professor of the year” blog post from 2008 or the social construction of reality ring any bells?

Americans really should know exactly what to do, given America has such a long history of regulating speech with lots of censorship; from the awful gag rule to preserve slavery, or banning Black Americans from viewing Birth of a Nation, all the way to the cleverly named WWII Office of Censorship.

What’s that? You’ve never heard of the U.S. Office of Censorship, or read its important report from 1945 saying Americans are far better off because of their work?

This is me unsurprised. Let me put it another way. Study history when you want to curate a better future, especially if you want to regulate AI.

Not only study history to understand the true source of troubles brewing now, growing worse by the day… but also to learn where and how to find truth in an ocean of lies generated by flagrant charlatans (e.g. Tesla wouldn’t exist without fraud, as I presented in 2016).

If more people studied history for more time we could worry less about the general public having skills in finding truth. Elon Musk probably would be in jail. Sadly the number of people getting history degrees has been in decline, while the number of people killed by a Tesla skyrockets. Already 19 dead from Elon Musk spreading flagrant untruths about AI. See the problem?

The average person doesn’t know what is true, but they know who they trust; the resulting power distribution is something they sense almost instinctively. They follow some path of ascertaining truth through family, groups, associations, “celebrity” etc. that provides them a sense of safety even when truth is absent. And few (in America especially) are encouraged to steep themselves in the kinds of thinking that break away from easy, routine and minimal judgment contexts.

Just one example of historians at work is a new book about finding truth in the average person’s sea of lies, called Myth America. It was sent to me by very prominent historians talking about how little everyone really knows right now, exposing truths against some very popular American falsehoods.

This book is great.

Yet who will have the time and mindset to read it calmly and ponder the future deeply when they’re just trying to earn enough to feed their kids and cover rising medical bills to stay out of debtors’ prison?

Also books are old technology so they are read with heaps of skepticism. People start by wondering whether to trust the authors, the editors and so forth. AI, as with any disruptive technology in history, throws that askew and strains power dynamics (why do you think printing presses were burned by 1830s American cancel culture?).

People carry bias into their uncertainty, which predictably disarms certain forms of caution/resistance during a disruption caused by new technology. They want to believe in something, swimming towards a newly fabricated reality and grasping onto things that appear to float.

It is similar to Advance Fee Fraud working so well with email these days instead of paper letters. An attacker falsely promises great rewards later, a pitch about safety, if the target reader is willing (greedy) enough to believe in some immediate lies coming through their computer screen.

Thus the danger is not just in falsehoods, which surround us all the time our whole lives, but in how old falsehoods get replaced with new falsehoods through a disruptive new medium of delivery: fear during rapid changes to which falsehoods are believed.

What do you mean boys can’t wear pink, given it was a military tradition for decades? Who says girls aren’t good at computers when they literally invented programming and led the hardware and software teams where quality mattered most (e.g. Bletchley Park was over 60% women)?

This is best understood as a power-shift process that invites radical, even abrupt, breaks depending on who tries to gain control over society, who can amass power, and how!

Knowledge is powerful stuff; creation and curation of what people “know” is often thus political. How dare you prove the world is not flat, undermining the authority of those who told people untruths?

AI can very rapidly rotate on falsehoods like the world being flat, replacing known and stable ones with some new and very unstable, dangerous untruths. Much of this is like the stuff we should all study from way back in the late 1700s.

It’s exactly the kind of automated information explosion the world experienced during industrialization, leading people eventually into world wars. Here’s a falsehood that a lot of people believed as an example: fascism.

Old falsehoods during industrialization fell away (e.g. a strong man is a worker who doesn’t need breaks and healthcare) and were replaced with new falsehoods (e.g. a strong machine is a worker that doesn’t need strict quality control, careful human oversight and very narrow purpose).

The uncertainty of sudden changes in who or what to believe next (power) very clearly scares people, especially in environments unprepared to handle surges of discontent when falsehoods or even truths rotate.

Inability to address political discontent (whether based in things false or true) is why the French experienced a violent disruptive revolution yet Germany and England did not.

That’s why the more fundamental problem is how Americans can immediately develop methods for reaching a middle ground as a favored political position on technology, instead of only a left and right (divisive terms from the explosive French Revolution).

New falsehoods need new leadership through a deliberative and thoughtful process of change, handling the ever-present falsehoods people depend upon for a feeling of safety.

Without the U.S. political climate forming a strong alliance, something that can hold a middle ground, AI can easily accelerate polarization that historically presages a slide into hot war to handle transitions — political contests won by other means.

Right, Shakespeare?

The poet describes a relationship built on mutual deception that deceives neither party: the mistress claims constancy and the poet claims youth.

When my love swears that she is made of truth
I do believe her though I know she lies,
That she might think me some untutored youth,
Unlearnèd in the world’s false subtleties.
Thus vainly thinking that she thinks me young,
Although she knows my days are past the best,
Simply I credit her false-speaking tongue;
On both sides thus is simple truth suppressed.
But wherefore says she not she is unjust?
And wherefore say not I that I am old?
O, love’s best habit is in seeming trust,
And age in love loves not to have years told.
Therefore I lie with her and she with me,
And in our faults by lies we flattered be.