Category Archives: Security

FL Tesla Kills One Motorcyclist

Many motorcyclists have died from sudden impact with the rear of a Tesla. This is another example of Tesla tragedy that calls into question the capability of its sensor system to react intelligently to common traffic.

According to the Florida Highway Patrol, the 47-year-old man was driving a Harley Davidson motorcycle northbound in the outside lane of Interstate 275 just north of milepost 43. Around 2:40 p.m., police said the motorcyclist drove across the gore [the triangular buffer at a lane merge] at an entrance ramp and collided with the back of a Tesla Model 3 that was traveling next to it.

While crossing a gore area is technically improper, it’s a relatively common quick-exit maneuver made by motorcyclists, especially Harley riders, who want to avoid dangerous merging into drivers’ blind spots. Little do they expect a Tesla to react in a dangerous and unpredictably inhuman way. The key expert questions here are:

1. Did Tesla’s systems detect the approaching motorcycle?

2. If so, did the car initiate sudden braking in response?

3. If there was sudden braking, was it an appropriate response or a dangerous overreaction by Tesla’s systems causing a deadly crash?

AZ Tesla Kills Two Pedestrians

There seems to be a notable increase lately in pedestrians killed by Tesla. Is the car getting worse at identifying humans?

A father and son were killed after being struck by a Tesla while refueling their truck on Loop 202 Friday evening, according to the Arizona Department of Public Safety.

DPS spokesperson Bart Graves reported that the incident occurred around 6:30 p.m. on Loop 202, just north of Elliot Road in the Laveen neighborhood of Phoenix.

State Troopers arrived and found two men allegedly struck by a Tesla, whose driver remained on scene.

The men, one in his 70s and the other in his 40s, were pronounced dead at the scene by the Phoenix Fire Department, according to Graves.

It sure isn’t getting better. This is the same Tesla tragedy story that has been reported since at least April 2018.

“I was waving this light like this so they could see it. It’s a bright LED, so I was going like this, and I had the strobe lights and stuff like this, this one blinking and that one blinking like this on top of my van and my brother’s truck had hazard lights on, too,” he said.

The Tesla ignored the law, ignored the hazard signals, ignored the lights.

Notably, Uber was raked over the coals for far less: it shut down its entire driverless program, and its safety driver even faced serious criminal charges in court, after a single death in April 2018.

A tale of two driverless programs?

Somehow Tesla doing worse on safety than the disgraced and cancelled Uber driverless program… means it keeps killing more and more pedestrians without any accountability.

Female Ghosts in the Machine: What Wollstonecraft Knew About AI in 1792

The ghosts of female philosophers haunt Silicon Valley’s machines. While tech bros flood Seattle and San Francisco in a race to claim revolutionary breakthroughs in artificial intelligence, the spirit of Mary Wollstonecraft whispers through their fingers, her centuries-old insights about human learning and intelligence echoing unacknowledged through their algorithms and neural networks.

1790 oil on canvas portrait by John Opie of philosopher Mary Wollstonecraft (1759-1797). Source: Tate Britain, London

In “A Vindication of the Rights of Woman” (1792), Wollstonecraft didn’t just argue for women’s education, she dismantled the very mechanical, rote learning systems that modern AI companies are clumsily reinventing at huge cost. Her radical vision of education as an organic, growing system that develops through experience and social interaction reads like a direct critique of today’s rigid, mechanical approaches to artificial intelligence.

The eeriest part? She wrote this devastating critique of mechanical thinking 230 years before transformer models and large language models would prove her right. While today’s AI companies proudly announce their discovery that learning requires social context and organic development, Wollstonecraft’s ghost watches from the margins of history, her vindication as ignored as her original insights.

Notable history tangent? She died from infection eleven days after giving birth to her daughter, who then went on to write Frankenstein in 1818 and basically invent science fiction.

When we look at modern language models learning through massive datasets of human interaction, we’re seeing Wollstonecraft’s philosophic treatises on organic learning scaled to the digital age.

David Hume’s philosophical contributions are also quite striking, given they’re nearly 300 years old as well. His “bundle theory” of mind and identity reads like a prototype for neural networks.

When Hume argued that our ideas are nothing more than collections of simpler impressions connected through association, he was describing something remarkably similar to the weighted connections in modern AI systems. His understanding that belief operates on probability rather than certainty is fundamental to modern machine learning.

Every time an AI system outputs a confidence score, it’s demonstrating Hume’s prescient point about probability and our modern dependence on empiricism.
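As a minimal sketch of that connection (my own illustration with invented numbers, not drawn from Hume or from any particular AI library), weighted association plus a probabilistic output looks something like this in code:

```python
import math

# A toy "bundle" of impressions: bits of evidence with learned association weights.
# (Purely illustrative values, not from any real model.)
impressions = {"wet_pavement": 1.0, "dark_clouds": 0.7, "umbrellas_out": 0.4}
weights = {"wet_pavement": 2.1, "dark_clouds": 1.3, "umbrellas_out": 0.8}

# Belief is graded, not certain: combine the weighted impressions into a score...
score_rain = sum(weights[k] * v for k, v in impressions.items())
score_dry = 0.5  # a competing hypothesis with weak support

# ...then squash the competing scores into probabilities (a softmax), the same
# kind of confidence score a modern classifier reports instead of a yes/no verdict.
exp_scores = [math.exp(score_rain), math.exp(score_dry)]
confidence_rain = exp_scores[0] / sum(exp_scores)
print(f"P(rain) = {confidence_rain:.3f}")  # a probability, never a certainty
```

The point is only the shape of the computation: association weights and graded belief, the same two moves Hume described.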

What’s particularly fascinating is how both thinkers rejected the clockwork universe model of their contemporaries. They saw human understanding as something messier, more organic, and ultimately more powerful than mere mechanical processes. Wollstonecraft’s insights about how social systems shape individual development are particularly relevant as we grapple with AI alignment and bias. She understood as a philosopher of the 1700s that intelligence, whether natural or artificial, cannot be separated from its social context.

The problem with our 1950s-style flowcharts that emerged from hard-fought victory in WWII isn’t just that they’re oversimplified; it’s that they represent a violent step backward from the sophisticated understanding of mind and learning that Enlightenment thinkers had already developed.

We ended up with such mechanistic models, and with simplistic implementations like passwords instead of properly messy heatmap authentication, because the industry was funded out of military-industrial contexts that too often prioritized command-and-control thinking over organic development. TCP/IP and HTTPS were academically driven exceptions, for example, prevailing over the Rochester-Stanford teams who fought hard to standardize on X.25.
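For what that contrast could look like in practice, here is a minimal sketch (entirely my own illustration, with hypothetical signal names and thresholds) of a brittle binary password gate next to a messier, probabilistic scoring of many weak signals:

```python
import hashlib
import hmac

# The mechanistic model: one brittle yes/no gate.
def password_ok(submitted: str, stored_hash: bytes, salt: bytes) -> bool:
    digest = hashlib.pbkdf2_hmac("sha256", submitted.encode(), salt, 100_000)
    return hmac.compare_digest(digest, stored_hash)

# A messier "heatmap" model (illustrative only): many noisy signals, each weighted,
# combined into a graded risk score rather than a single hard gate.
def risk_score(signals: dict) -> float:
    weights = {                           # hypothetical signal weights
        "new_device": 0.4,
        "unusual_hour": 0.2,
        "impossible_travel": 0.9,
        "typing_cadence_mismatch": 0.3,
    }
    return sum(weights[name] for name, present in signals.items() if present)

score = risk_score({"new_device": True, "unusual_hour": True,
                    "impossible_travel": False, "typing_cadence_mismatch": False})
print("step-up authentication required" if score > 0.5 else "allow")
```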

When Wollstonecraft wrote about the organic development of understanding, or when Hume described the probabilistic nature of belief, they were articulating ideas that would take computer science centuries to rediscover and apply as “novel” concepts divorced from all the evidence presented in social science.

As we develop AI systems that learn from social interaction, operate on probabilistic inference, and exhibit emergent behaviors, we’re not just advancing beyond the simplistic war-focused mechanical models of early computer science; we’re finally catching up to the insights of 18th-century philosophy. Perhaps the real innovation in AI isn’t the technology itself, but our acceptance of a particular woman’s more sophisticated understanding, from 1792, of what intelligence really means.

The next frontier in AI, not surprisingly, won’t be found in more complex algorithms, but in finally embracing the full implications of what Enlightenment thinkers understood about the nature of mind, learning, and society. When we look at the most advanced AI systems today and where they are going, with their fuzzy logic, their social learning, and their emergent behaviors, we’re seeing the vindication of ideas that Wollstonecraft and Hume would have recognized immediately.

Unfortunately, the AI industry seems dominated by an American “bromance” that isn’t particularly inclined to give anyone credit for the ideas that are being taken, corrupted and falsely claimed as futuristic or even unprecedented. Microsoft summarily fired all their ethicists in an attempt to silence objections to OpenAI investment, not long before a prominent whistleblower about OpenAI turned up dead.

Nothing to see there, I’m sure, as philosophers rotate in their graves. We haven’t just forgotten the lessons of Enlightenment thinkers; the Sam Altmans and Mark Zuckerbergs may be actively resisting them in favor of more controlled, corporatized, exploitative approaches to innovation with technology.

Let me give you an example of the kind of flawed and ahistoric writing I see lately. Rakesh Gohel posed this question on the proprietary, closed site ironically called “LinkedIn”:

Most people think AI Agents are just glorified chatbots, but what if I told you they’re the future of digital workforces?

What if?

What if I told you the tick tock of Victorian labor exploitation practices and inhumane colonialism doesn’t disappear if you just rebrand it to TikTok and use camera phones instead of paper and pen? Just like Victorian factory owners used mechanical timekeeping to control workers, modern platforms use engagement metrics and notification systems to maintain digital control.

The eyeball-grabbing “digital workforce” framing that Gohel stumps for is essentially reimagining the factory with APIs instead of steam engines and belts. Just as factory owners reduced skilled craftwork to mechanical processes, today’s AI companies are watering down complex social and cognitive processes into simple flowcharts that foreshadow their dangerous intentions. Gohel tries to sweeten his pitch using a colorful chart, which in fact illustrates just how fundamentally broken “AI influencer” thinking can be about thinking.

That, my fellow engineers, is a tragedy of basic logic. Contrasting a function call with a while loop… is like promoting 1950s-era computer theory at best. A check loop after you plan and do something! What would Deming say about PDCA, given that he was famous 50 years ago for touring the world lecturing on what this brand new chart claims to be the future?

The regression here goes beyond just technical architecture. When Deming introduced PDCA, he wasn’t just describing a feedback loop; he was promoting a holistic philosophy of continuous improvement and worker empowerment. The modern AI agent diagram strips away all of that context and social understanding, reducing it to the crudest kind of technical loop.
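For contrast, here is a minimal sketch (hypothetical helper functions, nothing from any vendor’s agent framework) showing that the celebrated “AI agent loop” is structurally just Deming’s Plan-Do-Check-Act written as a loop:

```python
def make_plan(state):
    # Plan: decide the next step toward the goal (a trivial stub for illustration)
    return f"work on {state['goal']}"

def execute(plan):
    # Do: in a real agent this would call a tool, an API, or a model
    return {"output": plan, "progress": 1}

def evaluate(result, state):
    # Check: compare the outcome against the goal (here, a fixed amount of work)
    return state["progress"] + result["progress"] >= 3

def agent_loop(goal, max_iterations=10):
    """Plan-Do-Check-Act expressed as a loop -- Deming, not a breakthrough."""
    state = {"goal": goal, "progress": 0}
    for _ in range(max_iterations):
        plan = make_plan(state)                   # Plan
        result = execute(plan)                    # Do
        done = evaluate(result, state)            # Check
        state["progress"] += result["progress"]   # Act: fold the result back into state
        if done:
            break
    return state

print(agent_loop("summarize a document"))
```

Nothing in that skeleton requires 2024 technology, which is exactly the point.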

This connects back to the earlier point about Wollstonecraft, because the AI industry isn’t just ignoring 18th-century philosophy, it’s also ignoring 20th-century management science and systems thinking. The “what if” diagram presents as revolutionary what Deming would have recognized decades ago as a primitive understanding of systematic improvement in intelligence.

Why does the American tech industry keep “rediscovering” and selfishly-corrupting or over-simplifying ideas that were better understood and presented widely decades or centuries ago?

A quick back-of-napkin sketch you likely would never see in the current put-other-peoples’-noses-to-the-grindstone American tech scene

Perhaps it’s because, for technically raw upwardly mobile privileged skids (TRUMPS), acknowledging any deep historical roots, such as giving real credit to the humanities or social science, would mean confronting the very harmful implications of their poorly constructed systems… implications which the world’s best philosophers like Wollstonecraft, Hume, and Deming have emphasized for hundreds of years.

The pattern is painfully clear — exhume a sophisticated philosophical concept, strip it to its mechanical bones, slap a technical name on it, and claim revolutionary insight. Here are just a few examples of AI’s philosophical grave-robbing:

  • “Attention Mechanisms” in AI (2017) rebranded William James’ Theory of Attention (1890). James described consciousness as selectively focusing on certain stimuli while filtering others in a dynamic, context-aware process involving both voluntary and involuntary mechanisms. The tech industry presents transformer attention as revolutionary when it’s implementing a stripped-down version of 130-year-old psychology (a minimal sketch of the mechanism follows this list).
  • “Reinforcement Learning” (2015) rebranded Thorndike’s Law of Effect (1898). Thorndike described how behaviors followed by satisfying consequences tend to be repeated, developing sophisticated theories about the role of context and social factors in learning. Modern RL strips this to pure mechanical reward optimization, losing all nuanced understanding of social and emotional factors.
  • “Federated Learning” (2017) rebranded Kropotkin’s Mutual Aid (1902). Kropotkin described how cooperation and distributed learning occur in nature and society, emphasizing knowledge development through networks of mutual support. The tech industry “discovers” distributed learning networks but focuses only on data privacy and efficiency, ignoring the social and cooperative aspects Kropotkin emphasized.
  • “Explainable AI” (2016) rebranded John Dewey’s Theory of Inquiry (1938). Dewey wrote about how understanding must be socially situated and practically grounded, emphasizing that explanations must be tailored to social context and human needs. Modern XAI treats explanation as a purely technical problem, losing the rich philosophical framework for what makes something truly explainable.
  • “Few-Shot Learning” (2017) rebranded Gestalt Psychology (1920s). Gestalt psychologists described how humans learn from limited examples through pattern recognition and developed sophisticated theories about how minds organize and transfer knowledge. Modern few-shot learning presents this as a novel technical challenge while ignoring deeper understanding of how minds actually organize and transfer knowledge.
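
To make the first item above concrete, here is a minimal sketch (toy dimensions and random numbers, not any production model) of scaled dot-product attention: a weighted, selective focus over inputs of roughly the kind James described.

```python
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Toy scaled dot-product attention: selectively weight inputs by relevance."""
    d_k = keys.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)         # how relevant is each input?
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: selective focus
    return weights @ values                          # blend the inputs by attention weight

rng = np.random.default_rng(0)
q = rng.normal(size=(1, 4))   # one query, the current "focus of attention"
k = rng.normal(size=(3, 4))   # three candidate inputs competing for attention
v = rng.normal(size=(3, 4))   # what each input contributes if attended to
print(scaled_dot_product_attention(q, k, v))
```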

These philosophical ghosts don’t just haunt our machines – they’re Wollstonecraft’s vindication made manifest, a warning echoing through centuries of wisdom. The question is whether we’ll finally listen to these voices from the margins of history, or continue pretending every thoughtless mechanical implementation of their ideas is cause to celebrate a breakthrough discovery. Remember, Caty Greene’s invention of the “cotton engine” or ‘gin (following her husband’s untimely death from over-exertion) came from intentions to abolish slavery, yet instead was stolen from her and reverse-patterned into the largest unregulated immoral expansion of slavery in world history. Today’s AI systems risk following the same pattern of automation tools intended to liberate human potential being corrupted into instruments of digital servitude.

“Naively uploading” our personal data into any platform that lacks integrated ethical design or safe learning capabilities is more like turning oneself into a slave exploited by the cruelty of emergent digital factory owners, rather than maintaining basic freedoms while connecting to a truly intelligent agent that can demonstrate aligned values. Agency is the opposite of what AI companies have deceptively been hawking as their vision of agents.

Gentlemen, You Can’t Dance to a Tesla Light Show: A Cold War Warning on Command & Control

Kubrick’s 1964 film “Dr. Strangelove” presented what seemed an absurdist critique of automation and control systems. While most bombers in the film could be recalled when unauthorized launches occurred, a single damaged bomber’s “CRM 114 discriminator” prevented any override of its automated systems – even in the face of an end-of-world mistake. This selective communication failure, where one critical component could doom humanity while the rest of the system functioned normally, highlighted the kind of dangerous fragility that necessitates tight regulation of automated control systems.

The film’s “discrimination” device, preventing override and sealing the world’s fate, was comical because it was the invention of a character portrayed as a paranoid conspiracy theorist (think: a fictional Elon Musk). The idea that a single point of failure in communications could trigger apocalyptic consequences was considered so far-fetched as to be unrealistic in the 1960s. Yet here we are, with Tesla rapidly normalizing paranoid, delusional automated override blocks as a valid architectural pattern without any serious security analysis or public scrutiny.

Traditional automakers since the Ford Pinto catastrophe understand design risks intuitively — they build mechanical overrides that cannot be software-disabled, showing a fundamental grasp of safety principles that Tesla has glowingly abandoned. In fact, other manufacturers specifically avoid building centralized control capabilities, not because of difficulty, but because engineers should always recognize and avoid inherent risks — following the same precautionary principle that guided early nuclear power plant designers to build in physical fail-safes. However, the infamous low-quality, high-noise car-parts assembly company known as Tesla has apparently willfully recreated, at massive scale, the worst architectural vulnerabilities that threaten civilian infrastructure.

Most disturbing is how Tesla masks a willful destruction of societal value systems using toddler-level entertainment. The “Light Show” is presented as frivolous and harmless, much like how early computer viruses were dismissed as fun pranks rather than serious security threats that would come to define devastating global harms. But engineers know the show is not just plugging trivial LED audio-response code into a car. What it actually demonstrates is a fleet-wide command and control system without sensible circuit breakers. It promotes highly explosive chemical cluster bombs mindlessly following centrally planned orders without any independent relation to context or consequences. It turns a fleet of 1,000 Teslas into an automation warfare concept reminiscent not just of the Gatling gun or the Chivers machine gun of African colonialism, but of the Nazi V-1 rocket program of WWII — a clear case of automated explosives meant to operate in urban environments that couldn’t be recalled once launched.

Finland 1940:

Threat? What threat? Soviet Foreign Minister Vyacheslav Molotov said he was just airlifting food into Finland. (Molotov’s “bread basket” technology — leipäkori — was in fact a cluster bomb. And yes, Finland was so anti-Semitic their air force really adopted the hooked-X for their symbol. REALLY!)
26 Jan 1940: “…the civil defense chief has named ‘Molotov’s Bread Basket.’ …equipped with 3 winged propeller devices. Its contents are divided into compartments containing dozens of different incendiary and ignition bombs. When the propeller sets the torpedo into a powerful spinning motion, the bombs have opened from its sides and scattered around the environment. …the Russians are throwing bread to us in their own way.” Source: National Library of Finland

Finland 2024:


Threat? What threat? Musk says it’s just a holiday light show. These are all just Tesla food delivery vehicles clustered for “throwing bread to us in their own way” like the fire-bombing of winter 1939 again.

The timing of propaganda is no accident. Tesla strategically launches these demonstrations during holidays like Christmas, using celebratory moments to normalize dangerous capabilities. It’s reminiscent of the “Peace is our Profession” signs decorating scenes in Dr. Strangelove, using festive imagery to mask dangerous architectural realities.

British RAF exchange officer Mandrake in the film Dr. Strangelove. Note the automation patterns or plays surrounding the propaganda.

Tesla’s synchronized light shows, while appearing harmless, demonstrate a concerning architectural pattern: the ability to push synchronized commands to large fleets of connected vehicles with potentially limited or blocked owner override capabilities. What makes this particularly noteworthy is not the feature itself, but what it reveals about the underlying command and control objectives of the controversial political activists leading Tesla. The fact that Tesla owners enthusiastically participate in these demonstrations shows how effectively the security risk has been obscured — it’s a masterclass in introducing dangerous capabilities under the guise of consumer features.
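
To make the architectural concern concrete, here is a minimal sketch (entirely hypothetical names and logic, not Tesla’s actual software) contrasting a fleet dispatcher that blindly pushes a synchronized sequence with one that checks a local, owner-controlled interlock before every step:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Vehicle:
    vin: str
    owner_override: bool = False              # a local interlock the owner can always set
    log: list = field(default_factory=list)

    def execute(self, step: str) -> None:
        self.log.append(step)

def run_synchronized_show(fleet, steps, respect_override=True):
    """Push a synchronized command sequence to every vehicle in the fleet."""
    for step in steps:
        for vehicle in fleet:
            if respect_override and vehicle.owner_override:
                vehicle.log.append(f"{step}: ABORTED by owner")  # fail safe: local control wins
                continue
            vehicle.execute(step)   # fail unsafe: without the check, the center always wins
        time.sleep(0.01)            # keep the whole fleet in lockstep

fleet = [Vehicle("VIN001"), Vehicle("VIN002", owner_override=True)]
run_synchronized_show(fleet, ["flash_lights", "open_doors"])
for v in fleet:
    print(v.vin, v.log)
```

The entire safety argument lives in that one local check; make it software-only and remotely updatable, and every vehicle in the fleet follows the central sequence no matter what its owner sees or hears.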

More historical parallels? I’m glad you asked. Let’s examine how the Cuban Missile Crisis highlights the modern risks of automated systems under erratic control.

During the Cuban Missile Crisis, one of humanity’s closest brushes with global nuclear catastrophe, resolution came through human leaders’ ability to identify and contain critical failure points before they cascaded into disaster. Khrushchev had to manage not just thorny U.S. relations but also prevent independent actors like Castro from triggering automated response systems that could have doomed humanity. While Castro controlled a small number of weapons in a limited geography, today’s Tesla CEO commands a vastly larger fleet of connected vehicles across every major city – with demonstrably less stability and even more concerning disregard for fail-safe systems than Cold War actors showed.

As Group Captain Mandrake illustrated so brilliantly to audiences watching Dr. Strangelove, having physical override capabilities doesn’t help if the system can fail-unsafe and ignore them. Are you familiar with how many people were burned alive in Q4 2024 by their Tesla door handles failing to operate? More dead in a couple months than the entire production run of the Ford Pinto, from essentially the same design failure — a case study in how localized technical failures can become systemic catastrophes when basic safety principles are ignored.

Tesla’s ignorant approach to connected vehicle fleets presents a repeat of these long-known and understood risks at an unprecedented scale:

  • Centralized Control: A single company led by a political extremist maintains the ability to push synchronized commands to hundreds of thousands of vehicles or more
  • Limited Override: Once certain automated sequences begin, individual owner control may have no bearing regardless of what they see or hear
  • Network Effects: The interconnected nature of modern vehicles means system-wide vulnerabilities can cascade rapidly
  • Scale of Impact: The sheer number of connected vehicles creates potential for widespread disruption

As General Ripper in Dr. Strangelove would say, “We must protect our precious vehicular fluids from contamination.” More seriously…

Here are some obvious recommendations that seem to be lacking from every single article I have ever seen written about the Tesla “light discriminator” flashy demonstrations:

  1. Mandate state-level architectural reviews of over-the-air update systems in critical transportation infrastructure. Ensure federal agencies allow state-wide bans of vehicles with design flaws. Look to aviation and nuclear power plant standards, where mandatory human-in-the-loop controls are the norm.
  2. Require demonstrable owner override capabilities (disable, reset) for all automated vehicle functions — mechanical, not just software overrides
  3. Develop frameworks for assessing systemic risk in connected vehicle networks, drawing on decades of safety-critical systems experience
  4. Create standards for fail-safe mechanisms in autonomous vehicle systems that prioritize human control in critical situations

What Kubrick portrayed as satire — how a single failed override in an otherwise functioning system could trigger apocalyptic consequences — has quietly become architectural reality with Tesla’s rising threats to civilian infrastructure. The security community watches light shows while missing their Dr. Strangelove moment: engineers happily building systems where even partial failures can’t be stopped once initiated, proving yet again that norms alone won’t prevent the creation of doomsday architectures. The only difference? In 1964, we recognized this potential for cascading disaster as horrifying. In 2024, we’re watching people ignorant of history filming it to pump their social media clicks.

In Dr. Strangelove, the image of a single malfunctioning automated sequence causing the end of the world was played for dark comedy. Today’s Tesla demonstrations celebrate careless intentional implementations of equally dangerous architectural flaws.

60 years of intelligence thrown out? It’s as if dumb mistakes that end humanity are meant to please Wall Street, all of us be damned. Observe Tesla propaganda as celebrating the wrong things in the wrong rooms — again.