Tesla Door Design Defect and Death Analysis in Piedmont Case

Many people are asking what will come from the widely reported court case for the three Piedmont teens killed by Tesla. Here’s a quick back-of-napkin summary of how it fits alongside many other court cases. Perhaps the pattern displayed clearly here will help victims and their families seek justice for entirely preventable tragedies.

Court Cases of Tesla Deadly Design Defects

Failure to Detect Stationary Emergency Vehicles
  • Cases: Genesis Mendoza Martinez (CA, 2/18/23); Jenna Monet (IN, 12/29/19); Steven Hendrickson (CA, 5/5/21)
  • Description: Autopilot failed to recognize and avoid stationary emergency vehicles in the roadway or breakdown lane

Failure to Recognize Traffic Control Devices
  • Cases: Naibel Benavides (FL, 4/25/19) – $243M verdict; Gilberto Lopez (CA, 12/29/19)
  • Description: Failed to stop at red lights and stop signs, continuing through intersections at high speed

Failure to Detect Other Vehicles
  • Cases: Jovani Maldonado (CA, 8/24/19); Landon Embry (UT, 7/24/22)
  • Description: Failed to recognize a Ford pickup truck ahead and a motorcycle, resulting in collisions

Unexpected Steering/Path Departure
  • Cases: Walter Huang (CA, 3/23/18) – Settled
  • Description: Vehicle veered into a center divider while Autopilot was engaged

Spontaneous Acceleration
  • Cases: David & Sheila Brown (CA, 8/12/20)
  • Description: Two separate spontaneous acceleration events causing collisions and fire

Failure to Detect Pedestrians
  • Cases: Douglas Mark Taylor (TX, 6/19/20)
  • Description: Vehicle struck a pedestrian in front of a home

Electronic Door Failure + Possible Acceleration/Brake Defects
  • Cases: Krysta Tsukahara, Jack Nelson, and Soren Dixon (Piedmont, CA, 11/27/24)
  • Description: Cybertruck accelerated from 0 to ~80 mph on residential streets where speeds above 40 mph are impossible under normal conditions. The vehicle traveled less than 4 minutes from Estates Drive/Somerset to the crash point on Hampton Road. The accelerator pedal was pressed 5 seconds before impact; automatic braking activated 0.5 seconds before impact with stationary objects. After the crash into a tree and then a retaining wall, the electronic doors failed completely, trapping all occupants. Manual releases were concealed beneath rubber mats. Witness Matt Riordan broke a window with a tree branch after 10-15 strikes and pulled survivor Jordan Miller out. The victims survived the crash impact and were then, trapped inside, killed by smoke inhalation and burned alive. Seatbelts wouldn’t release. Windows wouldn’t roll down. Trial set for February 2027. Tesla is pushing blame onto driver tests and social media photos to deflect from its own well-known, obvious, and multiple catastrophic design failures and its misleading social media claims.

Electronic Door Failure (Schwerte, Germany)
  • Cases: 43-year-old father and two 9-year-old children (9/7/25)
  • Description: Model S swerved off the road, crashed into a tree, and burst into flames. Bystander Roman Jedrzejewski rushed over with a fire extinguisher but could not open the retractable door handles – they were too hot and would not extend. “I tried to open the car, but that didn’t work… I didn’t help. It didn’t work.” The father and both children burned to death while trapped. A third 9-year-old child escaped (method unknown) and was airlifted to a hospital. Firefighters struggled with repeated flare-ups.

Electronic Door Failure (Davie, FL)
  • Cases: Dr. Omar Awan, 48-year-old anesthesiologist (2/24/19)
  • Description: Model S crashed into a palm tree. A police officer arrived immediately, but the door handles were retracted and did not “auto-present.” The officer and bystanders were unable to open the doors. Awan survived the crash with no broken bones or internal injuries but died from smoke inhalation as the car burned. The battery reignited twice while being towed. Tesla blamed Awan’s speed and toxicology despite the door handle failure being the documented cause of death.

Electronic Door Failure (Fort Lauderdale, FL)
  • Cases: Barrett Riley, 18, and Edgar Monserrat Martinez, 18 (5/8/18)
  • Description: Model S crashed at 116 mph and burst into flames. Bystanders arrived within seconds but could not open the doors because the handles were flush and did not extend. Both teenagers survived airbag deployment with no significant crash injuries but were trapped and burned to death. A third passenger, Alexander Berry, was ejected and survived. One father testified the crash was “entirely survivable” – the fire killed them, not the impact. The jury found Tesla 1% negligent.

Electronic Door Failure (Germany)
  • Cases: Laura and Noel, both 18 (8/16/2022)
  • Description: The automatic door unlocking system failed in the crash, and the rear doors could not be opened from inside or outside. Both occupants were alive after the crash, then trapped and burned to death as first responders watched.

Electronic Door Failure (Leesburg, VA)
  • Cases: Two occupants (12/9/23)
  • Description: Model Y crashed and caught fire. An off-duty firefighter was unable to open the doors and had to smash a window, burning himself while reaching for the concealed manual release. He rescued the driver but could not reach the passenger.

Common Tesla Patterns Opposite to Their Marketed “Crash Avoidance”
Tesla BLIND to obvious stationary objects:

  • Trees and poles
  • Emergency vehicles
  • The broadside of a huge semi-truck
  • Stopped traffic at intersections

CONFIRMED FIRE DEATHS CAUSED BY DOOR DESIGN: ELEVEN
Fatalities caused by escape being denied during a post-crash fire:

  • Problem known to the Tesla CEO since 2013: “We’ve got quite a fancy door handle, and occasionally the sensor would malfunction.” Claimed “fixed,” yet deaths continued for 12+ years
  • At least 34 documented incidents of Tesla door system failures in lawsuit filings
  • Schwerte, Germany (9/7/25): 3 killed – father (43) and two 9-year-old children
  • Piedmont Cybertruck (11/27/24): 3 killed – Tsukahara, Nelson, Dixon
  • Leesburg Model Y (12/9/23): Firefighter burned trying to access hidden manual release
  • Germany (8/16/22): 2 killed – Laura and Noel, both 18
  • Davie, Florida (2/24/19): 1 death – Dr. Omar Awan (48)
  • Fort Lauderdale, Florida (5/8/18): 2 killed – Barrett Riley (18) and Edgar Monserrat Martinez (18)
  • Tesla response: Blame victims entirely in order to distract from its “deathtrap” doors that by design prevent rescue
  • Witnesses: Bystanders and first responders arrive immediately but cannot rescue victims due to door design

    “A lot of people were near the car and we could see the car and I told them, ‘Please, all of the people should give some distance,’” said witness Ariel Craser. […] “I might have witnessed his last moments, if it was a guy or girl, I don’t know. I’m speechless, I don’t even know what to say,” said another witness, Barreto.

Notable Court Precedents
Naibel Benavides: a $243 million verdict against Tesla (including $200 million in punitive damages) for design defects, alongside a German court’s assessment of the Tesla “deathtrap” door design.

Piedmont Analysis:

  • Timeline: Less than 4 minutes from departure to the crash
  • Speed: 78-82 mph on residential streets (40 mph max normally possible)
  • Location: Hampton Road between Sea View and King
  • Impact: Front passenger side into large tree (common in Tesla crash reports)
  • Rescue: Witness broke the “bulletproof” armor glass after 10-15 hits with a tree branch
  • Sole Survivor: Jordan Miller had seatbelt release issues, serious burns, concussion
  • Victims: Conscious and aware, struggling to escape, trapped by doors, burned to death
  • Krysta Tsukahara: Heard screaming, tried crawling to the broken window, forced back by fire
  • Model: Cybertruck “armor glass” and “exoskeleton” were design decisions marketed as aiding “survival,” yet in fact they blocked rescue from outside and prevented survival
  • Potential cause: Sudden acceleration 5 seconds before the crash, with braking activated only 0.5 seconds before impact. The combination points to accelerator, brake, and sensor design defects that, yet again, made the door design defects fatal (see the back-of-napkin timing check below)
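
To make those figures concrete, here is a minimal back-of-napkin sketch in Python of what the reported timing implies. The 78-82 mph speed, the 5-second accelerator application, and the 0.5-second braking window come from the filings summarized above; the 0.9 g braking deceleration and the constant-speed simplification are assumptions added purely for illustration.

```python
# Rough plausibility check of the Piedmont timeline, not a crash reconstruction.
# Reported figures: ~78-82 mph, accelerator pressed 5 s before impact,
# automatic braking engaged 0.5 s before impact.
# ASSUMPTIONS: ~0.9 g braking on dry pavement, roughly constant speed beforehand.

MPH_TO_MS = 0.44704                  # miles per hour -> meters per second
G = 9.81                             # gravitational acceleration, m/s^2

speed_mph = 80                       # mid-range of the reported 78-82 mph
brake_window_s = 0.5                 # reported automatic braking window
assumed_decel_g = 0.9                # assumed hard-braking deceleration
pedal_window_s = 5                   # reported accelerator application before impact

speed_ms = speed_mph * MPH_TO_MS
speed_shed_ms = assumed_decel_g * G * brake_window_s        # delta-v = a * t
impact_speed_mph = (speed_ms - speed_shed_ms) / MPH_TO_MS
distance_m = speed_ms * pedal_window_s                      # upper-bound estimate

print(f"Speed shed by 0.5 s of hard braking: ~{speed_shed_ms / MPH_TO_MS:.0f} mph")
print(f"Estimated speed at impact:           ~{impact_speed_mph:.0f} mph")
print(f"Ground covered in the final 5 s:     ~{distance_m:.0f} m ({distance_m * 3.281:.0f} ft)")
```

Under these assumptions, half a second of even maximal braking sheds only about 10 mph, so the vehicle still met the tree at roughly 70 mph after covering nearly 180 meters during the final five seconds of pedal input.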

Apple “Anti-Woke” Police Censor the ICEBlock App: The New Know Nothings

Apple ICEBlock Ban Is a Predictable Failure of Historical Literacy

Apple’s abrupt removal of a citizen accountability tool from its App Store, following Justice Department pressure, represents corporate complicity in authoritarian consolidation. What makes this instructive is not its novelty (there is nothing novel here) but how faithfully it reproduces historical patterns.

Know Nothingism: Weaponized Ignorance as Strategy

The 1850s Know Nothing Party deployed ignorance as methodology, not accident, to build a foundation of “invisible” racism that could seize power yet remain unaccountable. “I know nothing” created plausible deniability for violence against Irish Catholics while delegitimizing those who documented it. Contemporary “anti-woke” rhetoric operates identically: opposing consciousness of systemic injustice. To be “anti-woke” is to advocate for not noticing how power operates.

Protestant immigrants to America created the “American Party,” better known as the “Know Nothing Party,” to deny Chinese and Catholic immigrants the same entry, as depicted in “Throwing Down the Ladder by Which They Rose” by Thomas Nast for Harper’s Weekly, 1870, New York, New York.

The Accountability Inversion

ICEBlock enabled citizens to observe ICE operations in public spaces—classical democratic accountability. The Justice Department reframed observation as threat: apps “put ICE agents at risk just for doing their jobs.” This inverts the logic entirely:

  • Citizens observing government agents are falsely recast as a threat to those agents
  • Public documentation is likewise falsely recast as a security risk
  • The observers themselves are then targeted as the threat requiring suppression

This is not poor reasoning. It is deliberate inversion to criminalize accountability itself. Do you remember that American white supremacist mobs of the 1800s would murder anyone who dared to report, let alone oppose, a racist lynching? Do you remember that by the 1900s support for such racist mob violence came from the President and federal troops? How ICE is being run today is… history.

Corporate Complicity: The Banality of Apple’s Decision

Apple’s compliance requires no conspiracy theory. Corporations resist state pressure only when resistance is profitable. This is Arendt’s banality of evil in corporate form—routine bureaucratic compliance without moral consideration. Apple likely evaluated this as standard content moderation, ignoring the historical precedents and democratic implications entirely.

What makes corporations particularly effective instruments of authoritarianism: they require no ideological commitment. Profit motive suffices.

The Wilson Precedent: When Federal Power Backs Nativist Violence

Know Nothings never captured the presidency—Fillmore lost badly in 1856. But their ideology evolved into “America First” and captured the White House under Wilson (1913-1921):

A depiction of white supremacist violence after the Civil War becoming even worse than before. “The Union as it Was” by Thomas Nast, Harper’s Weekly, New York, New York, 1874

  • KKK propaganda screened in White House
  • Federal government resegregated
  • Institutional legitimacy provided to white supremacist violence

Result: Red Summer 1919 (coordinated massacres across dozens of cities), Tulsa 1921 (police deputizing rioters, National Guard participating, aerial bombing of citizens, mass graves hidden for a century).

Armed National Guards intimidate an African American man on the sidewalk, during the “Red Summer” of white supremacist mob violence in Chicago, Illinois, August 1919.
Redacted page one headline of the “Austin American-Statesman” in Austin, Texas. Mon, Oct 6, 1919.

Each red dot represents a local Klan chapter, known as a Klavern, that spread across the country between the 1915 “America First” Presidential campaign and 1940. Source: Virginia Commonwealth University

This is what happens when Know Nothing ideology acquires federal backing: mob violence becomes state policy, observation becomes criminalized, accountability mechanisms are systematically dismantled.

The Current Pattern: Federal Authority Deployed Against Accountability

We are not at the beginning. We are observing the pattern’s return:

  • Presidential military deployments protecting ICE from public observation
  • Justice Department pressuring technology platforms
  • Federal troops deployed to multiple cities despite local opposition
  • Systematic reframing of observation as aggression

The Suicidal Logic of Cook’s Decision

Tim Cook is an openly gay man. By the 1920s KKK era—the culmination of Know Nothing ideology achieving federal power—he would have been explicitly targeted. The KKK’s enemies list: African Americans, Catholics, Jews, and “sexual deviants.” No amount of wealth provided immunity.

Cook has now established precedent that Apple will remove accountability applications at federal request. He has built, tested, and validated the censorship mechanism for an “America First” administration using 1920s terminology.

Delving into archival research, Weil found that Bullitt and Freud saw Wilson as a neurotic obsessed with his father, whom he both deeply loved and hated, and that the image of his father was later projected into other characters who first were his friends and later his enemies. Bullitt and Freud also found that Wilson had an unconscious bisexual desire that drove his love-hate relationships. Finally, the conversation offers some reflections on the difficulties presidential systems have in screening mentally unfit candidates for their positions and getting rid of them when they seem unable to fulfill their duties.

A President unfit for the job who hates his father? Sounds familiar.

What Cook actually did was hand an unfit President’s Justice Department proof that Apple’s infrastructure can be weaponized for state information control. When enforcement operations expand their targets—and historically, they always do—the mechanism for suppressing documentation already exists. Should the administration demand removal of apps enabling LGBTQ+ individuals to document harassment or identify safe spaces, would Apple refuse? They have already established they comply with federal pressure to remove basic transparency and accountability tools.

Historical Illiteracy as Strategic Failure

The story that comes to mind is of early Nazi Party leader Ernst Röhm, the gay “Stormtrooper” commander who helped Hitler consolidate power, presumably believing his position made him exempt from an ideology that hated him. Like many people who helped Hitler seize power in 1933, Röhm was executed by Hitler in 1934. Marginalized individuals who enable authoritarian movements consistently believe there will be exceptions made for just them. History demonstrates otherwise.

Wealthy Greenwood District residents during the Tulsa massacre. Jewish collaborators in various regimes. The pattern is consistent: collaboration purchases temporary delay, not safety. Wealth makes targets more visible, not safer.

Gay rights movements specifically studied how Nazis targeted LGBTQ+ individuals incrementally, how oppression infrastructure was built gradually, how collaboration purchased only temporary safety. Cook knows this history. He built the censorship tool anyway.

Know Nothing ideology, when it achieves federal power, expands targeting systematically. The KKK did not stop with African Americans. Enforcement machinery, once established, does not remain narrowly focused.

And the accountability tools that might document that expansion? Cook just removed them, at federal request, to deny transparency into the activity of law enforcement. Who needs a backdoor when you are no longer allowed to report who is coming through the front door?

One struggles to identify a historical precedent for this level of collaborative suicide dressed up as a pragmatic business decision. Cook has armed his ideological enemies, who explicitly use the terminology of movements that targeted people like him, with the censorship infrastructure they will deploy against him, while demonstrating Apple will comply with federal demands to remove accountability mechanisms.

This is not speculation. This is pattern recognition from the easily accessible historical record. The question is whether democratic institutions retain sufficient strength to impose costs for this decision, or whether things have already gone too far for corporate behavior to be meaningfully constrained.

Apple’s Revealing Choice

Source: Twitter, before it was captured by the New Know Nothings
Total capitulation to the latest “America First” demands from the White House is particularly striking given Apple’s history. The company famously claimed to have fought the FBI’s demand to unlock the San Bernardino shooter’s iPhone, refused to create backdoors for law enforcement, and positioned itself as defending user privacy against state overreach. Cook personally testified before Congress on these principles, as if to say “I fought the law and won.”

Instead, today the same Apple that spent years and considerable resources resisting federal pressure over encryption and privacy immediately complied when asked to remove simple accountability tools that track government enforcement operations. This reveals what the company actually values. Protecting consumer data had a business case: it differentiated Apple’s products and justified premium pricing. Protecting citizens from abuse by government overreach, even just preserving the ability to observe state power, apparently has no such case.

The company that fought for its own right to refuse backdoors surrendered its users’ right to refuse them. Apple’s principles, it turns out, are so selfish as to extend exactly as far as its own market advantage. When federal pressure threatens consumer trust in Apple product security: resist. When it targets citizen apps bringing transparency to government operations: comply immediately.

Cook has demonstrated which historical pattern Apple will follow: not the resistance that built the company’s privacy reputation, but the complicity that characterizes corporate behavior when democratic accountability conflicts with political convenience. The FBI fight revealed Apple’s capacity for resistance. The New Know Nothings’ cancellation of ICEBlock reveals its limits.

Waymo is Murder: Who Controls the Life Save or End Button?

A San Bruno police officer pulls over a Waymo robotaxi during a DUI checkpoint. The vehicle has just made an illegal U-turn—seemingly fleeing law enforcement. The officer peers into the driver’s seat and finds it empty. He contacts Waymo’s remote operators. They chat. The Waymo drives away.

No citation issued.

The police department’s social media post jokes:

Our citation books don’t have a box for “robot.”

But there’s nothing funny about what just happened, because… history. We are now witnessing the rebirth of corporate immunity for murder: vehicular violence at scale.

Mountain View Police stopped a driverless car in 2015 for being too slow. Google engineers responded that they had never read the traffic laws, so they could not have known them. A year later the same car got stuck in a roundabout. Again, the best and brightest engineers at Google simply claimed ignorance of the law.

In Phoenix, a Waymo drives into oncoming traffic, runs a red light, and “FREAKS OUT” before pulling over. Police dispatch notes:

UNABLE TO ISSUE CITATION TO COMPUTER.

In San Francisco, a cyclist is “doored” by a Waymo passenger exiting into a bike lane. She’s thrown into the air and slams into a second Waymo that has also pulled into the bike lane. Brain and spine injuries. The passengers leave. There’s a “gap in accountability” because no driver remains at the scene.

In Los Angeles, multiple Waymos obsessively return to park in front of the same family’s house for hours, like stalkers. Different vehicles, same two spots, always on their property line. “The Waymo is home!” their 10-year-old daughter announces.

In a parking lot, Waymos gather and honk at each other all night, waking residents at 4am. One resident reports being woken “more times in two weeks than combined over 20 years.”

A Waymo gets stuck in a roundabout and does 37 continuous laps.

Another traps a passenger inside, driving him in circles while he begs customer service to stop the car. “I can’t get out. Has this been hacked?”

Two empty Waymos crash into each other in a Phoenix airport parking lot in broad daylight.

And now, starting July 2026, California will allow police to issue “notices of noncompliance” to autonomous vehicle companies. But here’s the catch: the law doesn’t specify what happens when a company receives these notices. No penalties. No enforcement mechanism. No accountability.

In 1866, London police posted notices about traffic lights with two modes:

CAUTION: “all persons in charge of vehicles and horses are warned to pass the crossing with care, and due regard for the safety of foot passengers”

STOP: “vehicles and horses shall be stopped on each side of the crossing to allow passage of persons on foot”

The traffic lights were designed explicitly to stop vehicles for pedestrian safety. This was the foundational principle of traffic regulation.

Then American car manufacturers inverted it completely.

They invented “jaywalking”—a slur using “jay” (meaning rural fool or clown) to shame lower-class people for walking. They staged propaganda campaigns where clowns were repeatedly rammed by cars in public displays. They lobbied police to publicly humiliate pedestrians. They successfully privatized public streets, subordinating human life to vehicle flow.

Source: Google

The racist enforcement was immediate and deliberate. In Ferguson, 95% of arrests for this fantasy crime of “manner of walking in roadway” were of Black people (while actual victims of vehicular homicide went ignored), as these laws always intended.

The truth of the American auto industry is that inexpensive transit threatens racist policy. It wants cars to remain a ticket of privilege, which criminalizes being poor, where poor means not white. Source: StreetsBlog

In 2017, a North Dakota legislator proposed giving drivers zero liability for killing pedestrians “obstructing traffic.” Months later, a white nationalist in Charlottesville murdered a woman with his car, claiming she was “obstructing” him.

Now we’re doing it again—but this time the vehicles have no drivers to cite, and the corporations claim they’re not “drivers” either.

Tesla stands out for a reason. This slide from 2021 predicted it would get much worse, and by 2025 there have been at least 59 confirmed deaths caused by its robots. Source: My ISACA slides 2021

Corporations ARE legal persons when it benefits them:

  • First Amendment rights (Citizens United)
  • Religious freedom claims
  • Contract enforcement
  • Property ownership

But corporations are NOT persons when it harms them:

  • Can’t be cited for traffic violations
  • No criminal liability for vehicle actions
  • No “driver” present to hold accountable
  • Software “bugs” treated as acts of God

This selective personhood is the perfect shield. When a Waymo breaks the law, nobody is responsible. When a Waymo injures someone, there’s a “gap in accountability.” When police try to enforce traffic laws, they’re told their “citation books don’t have a box for ‘robot.’”

Here’s what’s actually happening: Every time police encounter a Waymo violation, they’re documenting a software flaw that potentially affects the entire fleet.

When one Waymo illegally U-turns, thousands might have that flaw. When one Waymo can’t navigate a roundabout, thousands might get stuck. When one Waymo’s “Safe Exit system” doors a cyclist, thousands might injure people. When Waymos gather and honk, it’s a fleet-wide programming error.

These aren’t individual traffic violations. They’re bug reports for a commercial product deployed on public roads without adequate testing.

But unlike actual bug bounty programs where companies pay for vulnerability reports, police document dangerous behaviors and get… nothing. No enforcement power. No guarantee of fixes. No way to verify patches work. No accountability if the company ignores the problem.

The police are essentially providing free safety QA testing for a trillion-dollar corporation that has no legal obligation to act on their findings despite mounting deaths.

We’ve seen this exact playbook before.

A Stryker vehicle assigned to 2nd Squadron, 2nd Stryker Cavalry Regiment moves through an Iraqi police checkpoint in Al Rashid, Baghdad, Iraq, April 1, 2008. (U.S. Navy photo by Petty Officer 2nd Class Greg Pierot) (Released)

From 2007-2014, Baghdad had over 1,000 checkpoints where Palantir’s algorithms flagged Iraqis as suspicious based on the color of their hat or the car they drove. U.S. Military Intelligence officers said:

If you doubt Palantir, you’re probably right.

The system was so broken that Iraqis carried fake IDs and learned religious songs not their own just to survive daily commutes. Communities faced years of algorithmic targeting and harassment. Then ISIS emerged in 2014—recruiting heavily from the very populations that had endured years of being falsely flagged as threats.

Palantir’s revenue grew from $250 million to $1.5 billion during this period. A for-profit terror-generation engine, or “self-licking ISIS-cream cone,” as I’ve explained before.

The critical question military commanders asked:

Who has control over Palantir’s deadly “Life Save or End” buttons?

The answer: Not the civilians whose lives were being destroyed by false targeting.

Who controls the “Life Save or End” button when a Waymo encounters a cyclist? A pedestrian? Another vehicle?

  • Not the victims
  • Not the police (can’t cite, can’t compel fixes)
  • Not democratic oversight (internal company decisions)
  • Not regulatory agencies (toothless “notices”)

Only the corporation. Behind closed doors. With no legal obligation to explain their choices.

When a Tesla, Waymo or Palantir algorithmic agent of death “veers” into a bike lane, who decided that was acceptable risk? When it illegally stops in a bike lane and doors a cyclist, causing brain injury, who decided that “Safe Exit system” was ready for deployment? When it drives into oncoming traffic, who approved that routing algorithm?

We don’t know. We can’t know. The code is proprietary. The decision-making is opaque. And the law says we can’t hold anyone accountable.

In 2016, Elon Musk loudly promised Tesla would end all cyclist deaths, and publicly abused and mocked anyone who challenged him. Then Tesla vehicles kept “veering” into bike lanes, and in 2018 one accelerated into and killed a man standing next to his bike.

Source: My MindTheSec slides 2021

Similarly in 2017, an ISIS-affiliated terrorist drove a truck down the Hudson River Bike Path, killing eight people. Federal investigators linked the terrorist to networks that Palantir’s algorithms had helped radicalize in Iraq. For some reason they didn’t link him to the white supremacist Twitter campaigns demanding pedestrians and cyclists be run over and killed.

Source: Twitter 2016

Since then, Tesla “Autopilot” incidents involving cyclists have become epidemic. In Brooklyn, a Tesla traveling 50+ mph killed cyclist Allie Huggins in a hit-and-run. Days later, NYPD responded by ticketing cyclists in bike lanes.

This is the racist jaywalking playbook digitized: Police enforce against the vulnerable population, normalizing their elimination from public space—and training AI systems to see cyclists as violators to be punished with death rather than victims.

Musk now stockpiles what some call “Swasticars”—remotely controllable vehicles deployed in major cities, capable of receiving over-the-air updates that could alter their behavior fleet-wide, overnight, with zero public oversight.

Swasticars: Remote-controlled explosive devices stockpiled by Musk for deployment into major cities around the world.

If we don’t act, we’re building the legal infrastructure for algorithmic vehicular homicide with corporate immunity. Here’s what must happen:

Fleet-Wide Corporate Liability

When one autonomous vehicle commits a traffic violation due to software, the citation goes to the corporation multiplied by fleet size. If 1,000 vehicles have the dangerous flaw, that’s 1,000 citations at escalating penalty rates.
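
To make the proposal concrete, here is a minimal sketch in Python of how a fleet-wide, escalating citation could be computed. The fleet size, base fine, and escalation rate below are placeholder assumptions, not figures from any statute; the only idea carried over from the paragraph above is that one software defect maps to one citation per affected vehicle, at escalating rates.

```python
# Minimal sketch of fleet-wide liability for a single shared software defect.
# All numbers (fleet size, base fine, escalation rate) are hypothetical.

def fleet_citation_total(fleet_size: int, base_fine: float, escalation: float) -> float:
    """Sum an escalating fine across every vehicle carrying the same flaw.

    Vehicle 1 pays base_fine, vehicle 2 pays base_fine * escalation,
    vehicle 3 pays base_fine * escalation**2, and so on.
    """
    return sum(base_fine * (escalation ** i) for i in range(fleet_size))

# Example: 1,000 vehicles share one dangerous flaw, a $250 base fine,
# and a 0.5% escalation per additional affected vehicle.
total = fleet_citation_total(fleet_size=1_000, base_fine=250.0, escalation=1.005)
print(f"Fleet-wide liability for one shared defect: ${total:,.0f}")
```

The design point is that liability tracks the deployed software build, not a single vehicle, so the cost of shipping a dangerous build scales with how widely it was shipped.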

Dangerous violations (driving into oncoming traffic, hitting pedestrians/cyclists, reckless driving) trigger:

  • Mandatory fleet grounding until fix is verified by independent auditors
  • Public disclosure of the flaw and the fix
  • Criminal liability for executives if patterns show willful negligence

Public Bug Bounty System

Every police encounter with an autonomous vehicle violation must:

  • Trigger mandatory investigation within 48 hours
  • Be logged in a public federal database (one possible record format is sketched below)
  • Require company response explaining root cause and fix
  • Include independent verification that fix works
  • Result in financial penalties paid to police departments for their QA work

If companies fail to fix documented patterns within 90 days, their permits are suspended until compliance.
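
As a thought experiment on what such a public database could hold, here is a minimal Python sketch of a single violation record. The field names, the 48-hour deadline, and the verification fields mirror the list above, but the schema itself is purely illustrative; no such federal database or agency format exists today.

```python
# Illustrative record format for the proposed public AV violation database.
# Field names and workflow deadlines are assumptions modeled on the list above.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional


@dataclass
class AVViolationReport:
    report_id: str                              # public, citable identifier
    company: str                                # fleet operator being cited
    vehicle_id: str                             # specific vehicle involved
    software_build: str                         # build shared across the fleet
    violation: str                              # e.g. "illegal U-turn at checkpoint"
    reported_by: str                            # police agency filing the report
    reported_at: datetime
    root_cause_response: Optional[str] = None   # company's required explanation
    fix_verified_by: Optional[str] = None       # independent auditor sign-off
    penalty_paid_to_agency: float = 0.0         # compensation for police QA work

    @property
    def investigation_due(self) -> datetime:
        # Mandatory investigation window from the list above: 48 hours.
        return self.reported_at + timedelta(hours=48)

    def is_overdue(self, now: datetime) -> bool:
        # Overdue if the 48-hour window passed with no root-cause response filed.
        return self.root_cause_response is None and now > self.investigation_due
```

Because every record names the shared software build, a pattern of unresolved reports against one build is exactly the evidence the 90-day permit suspension above would need.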

Restore the 1866 Principle

Source: My security engineering training slides 2018

Traffic rules exist to stop vehicles for public safety, not to give vehicles—or their corporate owners—immunity from accountability.

The law must state explicitly:

  • Corporations deploying autonomous vehicles are legally responsible for those vehicles’ actions
  • “No human driver” is not a defense against criminal or civil liability
  • Code must be auditable by regulators and available for discovery in injury cases
  • Vehicles that cannot safely stop for pedestrians/cyclists cannot be deployed
  • Human life takes precedence over vehicle throughput, period

When Waymo’s algorithms decide who lives and who gets “veered” (algorithmic death), who controls that button?

When Tesla’s systems target cyclists while police ticket the victims, who controls that button?

When corporations claim they’re persons for speech rights but not persons for traffic crimes, who controls that button?

Right now, the answer is: Nobody we elected. Nobody we can hold accountable. Nobody who faces consequences for being wrong.

Car manufacturers spent the extremist “America First” 1920s inventing the racist crime of “jaywalking” to privatize public streets and criminalize pedestrians. It worked so well that by 2017 hardly anyone blinked when a North Dakota legislator proposed zero liability for drivers who kill people with cars. By 2021, Orange County deputies shot a Black man to death while arguing over whether he had simply walked on a road outside painted lines.

Now we’re handing that same power to algorithms—except this time there’s no driver to arrest, no corporation to cite, and no legal framework to stop fleet-wide deployment of dangerous systems.

Palantir taught us what happens when unaccountable algorithms target populations: you create the enemies you claim to fight, and profit from the violence.

Are we really going to let that same model loose on American streets?

Because when police say “our citation books don’t have a box for robot,” what they’re really saying is: We’ve lost the power to protect you from corporate violence.

That’s not a joke. That’s murder by legal design.


The evidence is clear. The pattern is documented. The choice is ours: restore accountability now, or watch autonomous vehicles follow the same elite playbook that turned jaywalking into a tool of intentional racist violence and Palantir checkpoints into an ISIS recruiting engine used to justify white nationalism. See the problem, and the connection between them?

Who controls the button? Right now, nobody you can vote for, sue, or arrest. That has to change.


Here’s how William Blake warned us of algorithmic dangers way back in 1794. His “London” poem basically indicts institutions of church and palace for being complicit in producing systemic widespread suffering:

I wander thro’ each charter’d street,
Near where the charter’d Thames does flow,
And mark in every face I meet
Marks of weakness, marks of woe.

In every cry of every Man,
In every Infants cry of fear,
In every voice: in every ban,
The mind-forg’d manacles I hear

Those “mind-forg’d manacles” mean algorithmic oppression by systems of control, which appear external but are human-created. A “charter’d street” was privatized public space, a precedent for using power to enforce status-based criminality, such as Palantir checkpoints and jaywalking laws.

MI Tesla Kills One in Head-on Wrong-way Crash Into Semi

There’s an interesting detail in this report.

According to the Lenawee County Sheriff’s Office, the man was driving a Tesla west near Rodesiler when he entered the eastbound lane and made no effort to avoid an oncoming semi.

The Tesla was lodged under the semi after the crash, with both vehicles catching fire.

The Tesla driver died at the scene.

No effort to avoid a huge truck? That sounds like Tesla driverless.