Tesla AI is Killing Its Users. Does Anyone Care?

Earlier this year an important AI “canary” warning appeared in The Atlantic.

…think of the Bumble founder’s bubbly sales pitch as a canary in the coal mine, a harbinger of a world of algorithms that leave people struggling to be people without assistance. The new AI products coming to market are gate-crashing spheres of activity that were previously the sole province of human beings. Responding to these often disturbing developments requires a principled way of disentangling uses of AI that are legitimately beneficial and prosocial from those that threaten to atrophy our life skills and independence.

One rapid “atrophy” of life skills, marking the abrupt end of independence, comes when Tesla AI “veers” a car into a tree.

A shocking number of people have been killed while relying on Tesla’s obviously flawed AI, marketed as “driverless,” unaware of the threat posed by such unregulated technology.

Worse than Frankenstein with a flamethrower.

Stepping into this “vision” and giving it control is like choosing to live in a ruthless dictatorship that strips you of your freedom while promising you utopia. By the time people realize their mistake, they are literally trapped inside and burned alive by a Tesla.

Users do not fully understand the limitations and dangers of these systems, especially when marketing by a CEO emphasizes convenience and benefits while downplaying risks. Rapid technological shifts allow immoral profits by outpacing our ability to develop appropriate safety measures and social adaptations.

Just as we wouldn’t accept a pharmaceutical CEO rushing untested drugs to market while dismissing warnings, should we accept AI companies deploying potentially lethal autonomous systems without even basic safety validation? These systems fail catastrophically in ways that leave users powerless at the moment of crisis, placing huge tragedy and burden on society due to the greed of a few snake-oil salesmen.

When AI systems fail catastrophically, it’s not just individual users who suffer; there’s a massive ripple effect:

  • Families devastated by preventable deaths
  • Emergency responders traumatized by horrific accidents
  • Healthcare systems bearing costs of injuries
  • Communities losing trust in technology
  • Taxpayers funding emergency responses and investigations

And yet Tesla continues to kill owners, as well as many people around them, without accountability.

The continuing deployment of unsafe AI systems is not just a consumer protection issue but a public safety crisis. We are essentially watching Tesla conduct involuntary experiments on both its customers and the general public.

These ripple effects extend even further when considering long-term impacts:

  • Loss of public trust might delay adoption of genuinely beneficial AI technologies
  • Setting dangerous precedents for other companies rushing unsafe AI to market
  • Creating a cynical “move fast and break things” culture where human deaths are seen as acceptable collateral damage

Shouldn’t there be criminal liability for executives who knowingly deploy unsafe AI systems that result in deaths? Such conduct seems to meet the traditional criteria for criminal negligence, or even manslaughter, in other contexts. The fact that Tesla deaths are AI-related shouldn’t provide special immunity from consequences.

The “dictatorship” metaphor becomes even more apt given that the Tesla CEO has laid claim to being the “real” President of America.

Elon Musk celebrates buying the Trump family, turning the White House into an apartheid-like monarchy controlled by white male billionaires.

Authoritarian leaders, like Elon Musk’s family under South African apartheid, claim absolute authority while disclaiming responsibility for the human costs of their decisions. The hallmarks are consistent:

  • Historical comfort with systemic inequality
  • Belief in unaccountable rule by wealthy elites
  • Dismissal of human costs as necessary sacrifice
  • Using technology as a tool of control
  • Resistance to democratic oversight

They’re effectively seducing and tricking society into participating in dangerous experiments without consent, while insulating themselves from consequences.

Canadian police arrested Elon Musk’s racist “Technocracy” politician grandfather as a threat to democracy. Fascism wasn’t tolerated during WWII. He then fled to South Africa to build apartheid. Source: The Leader-Post, Regina, Saskatchewan, Canada, Tue, Oct 8, 1940, Page 16
Source: Twitter

Should we be looking at establishing something like a “corporate power off button” (e.g. CardSystems, CodeSpaces), where companies that repeatedly deploy unsafe (i.e. lethal) AI systems have them disabled by a higher authority, losing the right to operate in that space entirely?

  • Nuclear facilities have emergency shutdown systems
  • Airlines can be grounded by aviation authorities
  • Pharmaceutical products can be immediately pulled from market
  • Food producers can be shut down for safety violations

A historical pattern makes the case for a “power off button” even stronger. Just as fascist movements required decisive intervention, shouldn’t we have similar emergency powers to stop AI systems that demonstrate a pattern of lethal failures? The hard lessons about the dangers of unaccountable power over public safety, with canaries dying by the dozens, shouldn’t be ignored.

This promise of convenient technology has a terrible Musk, one intended to mask a wide-scale surrender of fundamental rights and safeguards.

How many more canaries need to die in the open before society recognizes this isn’t just about faulty technology, but about a dangerous ideology using AI to circumvent democratic safeguards?

AI is killing Tesla owners. Does anyone care?

Oregon Tesla Kills One

The CEO of Tesla has infamously and literally encouraged people to sleep while driving, which is extremely dangerous and does not make any sense.

OSP officials said that they responded at 4 a.m. on December 21 to a single-vehicle crash on Interstate 5 near milepost 101 in Douglas County. A southbound Tesla Model Y, driven by a 38-year-old Redmond, Washington man, left the roadway and struck a guardrail after the driver reportedly fell asleep, according to state police officials. Authorities said that five occupants were transported to an area hospital, where a 69-year-old passenger, identified as Rongfang Yang, later died.

The Safety Test Truth About Tesla Death Traps

Recent analysis reveals a dangerous loophole: no Tesla vehicle meeting U.S. specifications has undergone safety or crash testing by the National Highway Traffic Safety Administration (NHTSA) or independent organizations since the company’s 2022 removal of ultrasonic sensors.

The significant testing gap was initially exploited during 2016-2021, when NHTSA safety analysis of Tesla was blocked by the Trump family in the White House. This obvious safety regulation interference and corruption, combined with the 2022 sensor removal, has created an extended period without comprehensive third-party safety verification.

Technical analysis of the resulting poorly-conceived Tesla camera-only safety approach has raised concerns among experts. The system’s 1280×960 cameras capture only about 1.2 megapixels, in an era when practically disposable smartphones routinely capture 48+ megapixels, and suffer from dynamic range capabilities lower than consumer-grade webcams. Common sense suggests such limited resolution by design, barely 60% of the pixel count of full high-definition (1920×1080), severely constrains the system’s ability to detect and respond to hazards at all, let alone in challenging lighting conditions or inclement weather.
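For scale, here is a minimal arithmetic sketch comparing raw pixel counts; the 1280×960 figure is the reported camera resolution, while the reference formats (and the 8000×6000 dimensions for a 48 MP sensor) are illustrative assumptions:

```python
# Illustrative pixel-count arithmetic only; everything except the reported
# 1280x960 Tesla camera figure is an assumed reference format.
resolutions = {
    "Tesla camera (reported)": (1280, 960),
    "Full HD (1080p)": (1920, 1080),
    "48 MP phone sensor (assumed 8000x6000)": (8000, 6000),
}

tesla_pixels = 1280 * 960  # ~1.23 megapixels
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {w}x{h} = {pixels / 1e6:.2f} MP "
          f"({pixels / tesla_pixels:.1f}x the Tesla camera)")
```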

Data has already pointed towards significant safety implications, explaining the many deaths. Recent studies show the Tesla Model Y has recorded the highest fatality rate among all vehicle models, with accident rates documented at twice the industry standard. The Model 3 similarly shows extremely elevated fatality rates, almost off the chart compared to other car brands.

Source: IIHS

Automotive safety experts have identified parallels between Tesla’s current safety architecture and documented historical cases, including structural vulnerabilities reminiscent of the 1970 Ford Pinto.

We have seen a number of crashes involving Tesla vehicles where occupants survived the trauma of the crash but were unable to escape the vehicle because its electronic door latches were no longer operational.

These compounding factors demand intensified scrutiny from safety advocates and regulatory authorities: the testing gap, reliance on severely outdated camera technology, elevated fatality rates, and emergency egress concerns. No wonder the notoriously corrupt Trump family is again offering Tesla a deadly loophole for money.

Source: Twitter

We’re seeing a perfect storm of preventable dangers:

  • Deliberately inadequate sensors
  • Evasion of safety testing
  • Known fatal design flaws
  • Political protection of deadly practices

The comparison to the Ford Pinto is particularly apt as both cases involve:

  • Known safety defects
  • Corporate decisions prioritizing profits over lives
  • Regulatory failures
  • Preventable deaths

The key difference is scale: Tesla’s AI multiplies the danger across its entire fleet simultaneously. This makes the case even more urgent. We’re not just counting individual mechanical failures but system-wide lethal AI behaviors.

Tesla represents a new level of corporate negligence, where inadequate technology is deliberately deployed with political protection to maximize profits despite known fatal consequences.

Chemists Use AI to Decode and Preserve Berlin Wall

Here’s a novel approach to preserving the wall everyone wanted to tear down.

The team first examined 15 paint chips and observed that all had a maximum of two or three layers of acrylic paint – brushstrokes as opposed to spray paint. Then, they used Raman spectroscopy to characterise the chips and identified titanium white, azopigments (yellow and red chips) and lead chromate (green) as the primary pigments present.

By mixing common commercial paint with titanium white, the scientists quantified the dye dilution in mixtures used by the artists. They trained a machine learning algorithm to predict the ratio between the pigments using Raman data at various pigment concentrations. From the dataset, they extracted wavenumber values as features while concentration values served as labels. A custom neural network was thus built for regression tasks, predicting pigment concentrations. The paint chips were found to contain titanium white and up to 75% pigments, depending on the piece of wall from which they were sourced. The results rivalled those achieved using laboratory equipment.
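As a rough illustration of that kind of pipeline, here is a minimal sketch on synthetic data; the spectral grid, band positions, and the scikit-learn MLPRegressor standing in for the team’s custom network are my assumptions, not the authors’ actual code:

```python
# Illustrative sketch only: regression from simulated Raman spectra to
# pigment concentration. Data and model shape are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Intensities sampled at fixed wavenumbers are the features; the pigment
# concentration mixed into titanium white is the label.
wavenumbers = np.linspace(200, 1800, 400)        # cm^-1 grid (assumed)
concentrations = rng.uniform(0.0, 0.75, 500)     # up to 75% pigment

def simulate_spectrum(c):
    """Toy spectrum: a titanium-white-like band plus a pigment band
    whose height scales with concentration c, plus noise."""
    white = (1 - c) * np.exp(-((wavenumbers - 640) ** 2) / 800)
    pigment = c * np.exp(-((wavenumbers - 1340) ** 2) / 600)
    return white + pigment + rng.normal(0, 0.02, wavenumbers.size)

X = np.array([simulate_spectrum(c) for c in concentrations])
y = concentrations

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small fully connected network for the regression task.
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out spectra: {model.score(X_test, y_test):.3f}")
```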

Apparently the Stasi were lax in recording paint methods.