AI is Killing Tesla Owners. Does Anyone Care?

Earlier this year an important AI “canary” warning was published in The Atlantic.

…think of the Bumble founder’s bubbly sales pitch as a canary in the coal mine, a harbinger of a world of algorithms that leave people struggling to be people without assistance. The new AI products coming to market are gate-crashing spheres of activity that were previously the sole province of human beings. Responding to these often disturbing developments requires a principled way of disentangling uses of AI that are legitimately beneficial and prosocial from those that threaten to atrophy our life skills and independence.

One rapid “atrophy” of life skills, marking the abrupt end of independence, comes when Tesla AI “veers” a car into a tree.

A shocking number of people have been killed relying on Tesla’s obviously flawed AI, marketed as “driverless,” unaware of the threat from such unregulated technology.

Worse than Frankenstein with a flamethrower.

Stepping into this “vision” and giving it control is like choosing to live in a ruthless dictatorship that strips you of your freedom while promising you utopia. By the time people realize their mistake, they are literally trapped inside and burned alive by a Tesla.

Rapid technological shifts enable immoral profits by outpacing our ability to develop appropriate safety measures and social adaptations. Users do not fully understand the limitations and dangers of these systems, especially when marketing by a CEO emphasizes convenience and benefits while downplaying risks.

Just as we wouldn’t accept a pharmaceutical CEO rushing untested drugs to market while dismissing warnings, should we accept AI companies deploying potentially lethal autonomous systems without even basic safety validation? These systems fail catastrophically in ways that leave users powerless at the moment of crisis, inflicting huge tragedy and burden on society for the greed of a few snake-oil salesmen.

When AI systems fail catastrophically, it’s not just individual users who suffer; there’s a massive ripple effect:

  • Families devastated by preventable deaths
  • Emergency responders traumatized by horrific accidents
  • Healthcare systems bearing costs of injuries
  • Communities losing trust in technology
  • Taxpayers funding emergency responses and investigations

And yet Tesla continues to kill owners, as well as many people around them, without accountability.

The continuing deployment of unsafe AI systems is not just a consumer protection issue but a public safety crisis. We are essentially watching Tesla conduct involuntary experiments on both its customers and the general public.

These ripple effects extend even further when considering long-term impacts:

  • Loss of public trust might delay adoption of genuinely beneficial AI technologies
  • Setting dangerous precedents for other companies rushing unsafe AI to market
  • Creating a cynical “move fast and break things” culture where human deaths are seen as acceptable collateral damage

Shouldn’t there be criminal liability for executives who knowingly deploy unsafe AI systems that result in deaths? Such conduct seems to meet the traditional criteria for criminal negligence, or even manslaughter, in other contexts. The fact that Tesla deaths are AI-related shouldn’t provide special immunity from consequences.

The “dictatorship” metaphor becomes even more apt given that the Tesla CEO has laid claim to being the “real” President of America.

Elon Musk celebrates buying the Trump family to turn the White House into an apartheid-like monarchy controlled by white male billionaires.

Authoritarian leaders, like the Musk family under South African apartheid, claim absolute authority while disclaiming responsibility for the human costs of their decisions. The hallmarks are familiar:

  • Historical comfort with systemic inequality
  • Belief in unaccountable rule by wealthy elites
  • Dismissal of human costs as necessary sacrifice
  • Using technology as a tool of control
  • Resistance to democratic oversight

They’re effectively seducing and tricking society into participating in dangerous experiments without consent, while insulating themselves from consequences.

Canadian police arrested Elon Musk’s racist “Technocracy” politician grandfather as a threat to democracy; fascism wasn’t tolerated during WWII. He then fled to South Africa to build apartheid. Source: The Leader-Post, Regina, Saskatchewan, Canada, Tue, Oct 8, 1940, Page 16

Should we be looking at establishing something like a “corporate power off button” (e.g., CardSystems, CodeSpaces), where companies that repeatedly deploy unsafe (i.e., lethal) AI systems have them disabled by a higher authority and lose the right to operate in that space entirely?

  • Nuclear facilities have emergency shutdown systems
  • Airlines can be grounded by aviation authorities
  • Pharmaceutical products can be immediately pulled from market
  • Food producers can be shut down for safety violations

A historical pattern makes the case for a “power off button” even stronger. Just as fascist movements required decisive intervention, shouldn’t we have similar emergency powers to stop AI systems that demonstrate a pattern of lethal failures? The hard lessons about the dangers of unaccountable power over public safety, with canaries dying by the dozens, shouldn’t be ignored.

This promise of convenient technology has a terrible Musk, one intended to mask a wide-scale surrender of fundamental rights and safeguards.

How many more canaries need to die in the open before society recognizes this isn’t just about faulty technology, but about a dangerous ideology using AI to circumvent democratic safeguards?

AI is killing Tesla owners. Does anyone care?

One thought on “AI is Killing Tesla Owners. Does Anyone Care?”

  1. Yo for real tho, these Tesla crashes are straight facts my dude. Like we don’t let pharma companies yeet random pills at people, why we letting AI cars smoke folks? Facts no cap fr fr. Time to shut that down.
