US Court Green Lights Execution by Tesla: Probation for Robotic Manslaughter

In California, a Tesla using Autopilot ran a red light and killed two people, and its driver received only two years of probation. The penalty for such an incident, in which a Tesla with autonomous capabilities causes deaths, is strikingly lenient. Enabling Autopilot and pointing it at someone apparently results in nothing worse than a vehicular manslaughter charge, potentially giving a green light to robot attacks.

A 28-year-old man who was behind the wheel of a Tesla Model S on autopilot in 2019 when it ran a red light in Gardena and slammed into a car, killing two people, authorities said, has been sentenced to two years of probation after pleading no contest to two counts of vehicular manslaughter.

This outcome is disheartening because misuse of Autopilot now appears effectively shielded from serious prosecution, even when exploited for deliberate, socially or politically motivated killings, including mass-casualty attacks.

ALTHOUGH MOST PERSONS WOULD NOT ADMIT THE WOULD-BE ASSASSIN’S RIGHT TO JUDGE, CONDEMN, AND EXECUTE, THEY MIGHT CONCEDE THE RIGHT OF AN INDIVIDUAL TO JUDGE, CONDEMN, AND ORDER EXECUTION, AS THEY DID TO HARRY TRUMAN IN HIS USE OF THE HYDROGEN BOMB.

The widespread release of faulty software into Tesla vehicles, which can be centrally controlled and updated, raises a significant concern: a fleet that can behave like loitering cluster munitions (call them Autopilot attacks). Even without sophisticated intercontinental ballistic missiles (ICBMs), entities such as white nationalist groups or North Korea could compromise data integrity and direct swarms of Teslas into dangerous actions, such as ignoring traffic lights or veering onto sidewalks and into buildings, causing harm and chaos.

Furthermore, manipulation of Tesla's Autopilot could let individuals escape severe charges, downplaying intentional acts that end in explosions and capital murder as mere negligence. This raises serious concerns about the potential misuse and consequences of such technology.

Yoshihiro Umeda

In April 2018, Yoshihiro Umeda was killed when a Model X on Autopilot suddenly accelerated into a group of people standing in front of a large white van. The Tesla's owner claimed to have been asleep at the time. The incident bore a striking resemblance to a terror attack.

Umeda's family filed a wrongful death lawsuit against Tesla, blaming Autopilot and claiming the defective vehicle caused the tragedy. In 2022, however, the lawsuit was coldly dismissed from federal court in California and pushed to Japan instead, raising suspicions about the reasons behind the decision.

For years, Tesla prolonged the legal proceedings, putting immense pressure on the victim's family with its arguments. Even as it continued to sell cars in Japan, and those cars kept injuring and even killing Japanese people, the company argued it should be exempt from accountability under US or California law because "Japan is not a party to the Hague Convention."

"Tesla further argues, persuasively, that access to third party evidence in Japan for a proceeding in the U.S. would be at best extraordinarily cumbersome, time-consuming, expensive and with uncertain results. Dkt. 15-1 (Yamashita Decl.) ¶ 15 ("Japan is not a party to the Hague Convention …")" Umeda v. Tesla Inc., Case No. 20-cv-02926-SVK, 10 (N.D. Cal. Sep. 23, 2020)

Let’s clarify some misleading information.

First, contrary to what a Tesla lawyer claimed and a possibly confused US judge accepted, Japan is indeed a party to the Hague Convention. The Convention on the Service Abroad of Judicial and Extrajudicial Documents in Civil or Commercial Matters (the Hague Service Convention) was adopted on November 15, 1965, at the Hague Conference on Private International Law. Japan signed it on March 12, 1970, ratified it on May 28, 1970, and it entered into force for Japan on July 27, 1970.

American courts have in fact used the Hague Service Convention with Japanese auto manufacturers for many years. The claim that Japan is not a party to it is simply misinformation.

In a 2008 Hague Conference on Private International Law questionnaire, Japan gave a self-assessment of the general operation of the Service Convention. Its rating was as follows:

[ ] Excellent
[X] Good
[ ] Satisfactory
[ ] Unsatisfactory

In other words, Japan itself considered the Service Convention to be operating well at that time.

Second, the Hague Convention is not the sole authority governing service and discovery abroad. The European Union has its own superseding Council Regulation (EC) No. 1393/2007, and even before the Hague Convention, Japan was already handling requests to obtain evidence under its 1963 Consular Convention with the United States.

That 1963 Consular Convention contains Article 17(1)(e), under which a US attorney must rely on a Japanese witness offering information voluntarily and cannot compel testimony. In that light, Tesla's objection reads more like a rejection of victims' rights than anything else.

Third, service through the Central Authority under the Hague Service Convention is no easy and fast picnic either. It takes months simply to prepare the documents and work through the usual multilingual bureaucracy of an "official language" such as Japanese (Article 5 of the Service Convention).

Tesla deliberately used stalling and delay as a cruel defense strategy to hinder the litigation. Its repeated efforts to avoid US courts, and English-language reporting of the proceedings, burdened grieving Japanese families with years of unnecessary hardship, making it very clear who intentionally turned the judicial process into something "extraordinarily cumbersome, time-consuming, expensive, and uncertain."

Taken as a whole, the record makes it evident that Tesla's Autopilot was unsafe, defective, and an immediate threat to pedestrians back in 2018, yet the company evaded accountability in the court system. Had US courts held Tesla's robotics to account for their defects then, dozens if not hundreds of deaths might have been prevented.

Elaine Herzberg

As many of us are aware, in March 2019 Arizona prosecutors concluded that Uber was not criminally responsible for its "self-driving" vehicle killing a pedestrian. The "back-up" driver involved in the incident, however, was charged with negligent homicide.

This echoed the Arizona Governor's Executive Order 2018-04, which invited Uber to operate in the state.

If a failure of the automated driving system occurs that renders that system unable to perform the entire dynamic driving task relevant to its intended operational design domain, the fully autonomous vehicle will achieve a minimal risk condition; The fully autonomous vehicle is capable of complying with all applicable traffic and motor vehicle safety laws and regulations of the State of Arizona, and the person testing or operating the fully autonomous vehicle may be issued a traffic citation or other applicable penalty in the event the vehicle fails to comply with traffic and/or motor vehicle laws

The charges against the human operator of Uber's autonomous vehicle were based on the vehicle's failure to achieve a "minimal risk condition," as specified in the Executive Order (EO). The glaring problem with that liability theory is that Arizona had defined a "minimal risk condition" in terms of a robot operating without any human operator at all.

MINIMAL RISK CONDITION. A low-risk operating mode in which a fully autonomous vehicle operating without a human person achieves a reasonably safe state, such as bringing the vehicle to a complete stop, upon experiencing a failure of the vehicle’s automated driving system that renders the vehicle unable to perform the entire dynamic driving task.

This definition contains two major issues. First, describing a robot's "minimal risk" as a "reasonably safe state" is circular: it essentially says safety is whatever is reasoned to be safe, which specifies nothing meaningful. Making matters worse, the state defines a "fully autonomous" system as one "designed to function as a level four or five," so instead of focusing on preventing harms such as injury or death, the emphasis falls on a system designed to predict when it is unable to predict, which is counterintuitive and confusing.

Second, it allows a temporary, low-skilled human operator to be held liable for the "dynamic driving task" of a robot that is, by definition, fully autonomous and meant to operate with no human present. That creates a concerning accountability loophole for potentially harmful robots.

In essence, the definition's vague notion of safety risks holding the wrong parties accountable for the actions of autonomous systems, leaving room for misuse and harm.

The situation went from bad to worse when the Arizona district attorney had to step back from the investigation because of a conflict of interest: the office had actively promoted Uber's "self-driving" technology to build public trust, with a campaign suggesting that Uber's autonomous vehicles reduced risk because they could not drink and drive, implying they were safer than human drivers under the influence of alcohol.

Yet liability for the incident fell on Uber's "back-up" driver, who could be argued to have been more impaired in perception and judgment than someone under the influence of drugs or alcohol. This raises serious questions about the fairness and accuracy of how responsibility was assigned.

…she was looking toward the bottom of the SUV’s center console, where she had placed her cell phone at the start of the trip.

On top of all this, it was revealed that Uber had deliberately disabled the vehicle's driver-assistance and automatic braking technology, which was specifically designed to protect pedestrians. That raises significant concerns about manufacturers shifting responsibility for the safety of pedestrians and other road users onto hamstrung operators.

At 1.3 seconds before impact, the self-driving system determined emergency braking was needed. But Uber said, according to the NTSB, that automatic emergency braking maneuvers in the Volvo XC90 were disabled while the car was under computer control in order to “reduce the potential for erratic vehicle behavior.”

Having disabled automatic braking, Uber also neglected to implement any alert to tell its "back-up" driver that the car had detected the need for emergency braking. In simple terms, Uber's system failed to recognize a human in its path, misclassifying her variously as a vehicle and a bicycle, and so it did nothing to prevent the harm even though she was clearly visible at a considerable distance. The blame was then shifted to the driver.
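To put that 1.3-second window in perspective, here is a back-of-the-envelope sketch. The speed, braking deceleration, and human reaction time below are illustrative assumptions, not figures from the NTSB report; the point is how little an alert issued that late could have accomplished compared with machine-initiated braking.

```python
# Illustrative sketch: ground covered in the 1.3 seconds between the system
# deciding emergency braking was needed and impact, versus the distance a
# human needs to perceive, react, and stop. Values are assumptions, not NTSB data.

MPH_TO_MS = 0.44704          # miles per hour -> meters per second
speed_mph = 40               # assumed operating speed
v = speed_mph * MPH_TO_MS    # roughly 18 m/s

window = 1.3                 # seconds from detection to impact (per the NTSB quote above)
reaction = 1.0               # assumed human perception-reaction time, seconds
decel = 7.0                  # assumed firm braking deceleration, m/s^2

covered = v * window                          # distance covered if nobody brakes at all
needed = v * reaction + v**2 / (2 * decel)    # react first, then brake to a stop

print(f"Ground covered in the {window}s window with no braking: {covered:.1f} m")
print(f"Distance a human needs to react and stop from {speed_mph} mph: {needed:.1f} m")
```

Under those assumptions the car covers roughly 23 meters in that window, while an alerted human would need roughly 40 meters to react and stop. An alert alone, issued that late, was never going to be enough; only the disabled automatic braking could have shed meaningful speed.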

To its credit, Uber eventually shut down its self-driving operation. Tesla, in contrast, downplayed its involvement in a pedestrian death around the same time in 2018 and continued charging customers a substantial premium to use unsafe autonomous features on public roads, pushing the boundaries of what it could get away with on already proven hazardous technology.

Gilberto Alcazar Lopez and Maria Guadalupe Nieves-Lopez

Those two incidents provide an important backdrop for a tragic event in 2019. In it, a Tesla, as has been observed in several other cases, disregarded repeated speed warnings and ran a red light at more than 70 mph, predictably colliding with another car and killing its two occupants.

Los Angeles police Officer Alvin Lee testified that numerous traffic signs warn drivers to slow down as they approach the end of the freeway. […] The case was believed to be the first felony prosecution filed in the U.S. against a driver using widely available partial-autopilot technology.
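For a sense of why those end-of-freeway warnings matter, here is a minimal stopping-distance sketch at the reported speed. The reaction-time and deceleration values are assumptions for illustration, not figures from the case record.

```python
# Rough stopping-distance estimate at the roughly 70 mph reported in the
# Gardena crash. Reaction time and braking deceleration are illustrative assumptions.

MPH_TO_MS = 0.44704          # miles per hour -> meters per second
speed_mph = 70
v = speed_mph * MPH_TO_MS    # about 31 m/s

reaction = 1.5               # assumed perception-reaction time, seconds
decel = 7.0                  # assumed hard braking deceleration, m/s^2

stopping_m = v * reaction + v**2 / (2 * decel)
print(f"Stopping distance from {speed_mph} mph: about {stopping_m:.0f} m ({stopping_m * 3.281:.0f} ft)")
```

Under those assumptions a car needs well over 100 meters to come to a stop from 70 mph, which is exactly why the signage demands slowing down long before the end of the freeway.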

According to NHTSA, its special investigation teams have been involved in 26 crashes related to Tesla's Autopilot since 2016, resulting in at least 11 deaths. Data compiled from local Tesla crash reports at tesladeaths.com, however, shows a much higher tally of 432 deaths, with Autopilot at fault in 38 cases, more than three times NHTSA's number.

Like the family of the 2018 Tesla victim in Japan, the families of the 2019 victims in California have filed lawsuits against Tesla for selling defective vehicles. A joint trial is scheduled for mid-2023, and this time Tesla cannot keep it from being conducted in English and within the United States.

One can’t help but wonder if this trial could have been avoided if Yoshihiro Umeda’s case had been given proper attention.

The lenient two-year probation for the intentional violent use of a dangerous autonomous system raises serious concerns about preventable deaths, especially if socially or politically motivated actors take such systems onto public roads. The implications are significant: even global figures like Kim Jong Un, Trump, or Putin may be observing a newly widespread destructive threat vector with interest.

Tesla Quietly Admits in 2023 Autopilot Isn’t Ready For Public Roads

Flashback to 2016, when Tesla very confidently said that more of its cars on the road would mean fewer injuries. An absolute and total lie.

As more real-world miles accumulate and the software logic accounts for increasingly rare events, the probability of injury will keep decreasing. Autopilot is getting better all the time…

Yeah, no. What do we call the opposite of that false prediction? Deaths have become increasingly common because of Tesla design and manufacturing defects.

Source: Tesladeaths.com

A new Tesla job post in Chicago is very revealing: the company now appears to be in a panic, rebooting as if it doesn't know what to do, while gaslighting remains its go-to tactic.

Job Type: Full-time […] This is an at-will, temporary position. The assignment is expected to last 3 months.

Nothing says full-time like a temporary position. And what is seasonal about driving in America? As if people could choose not to use their cars based on the season.

It's a temp job that isn't set up to be paid fairly as a temp job. The simple math: temp staff are owed overtime (1.5x hourly wage) for any time worked over 40 hours per week.

Tesla wants to demand a lot of hours of work per day, yet will not pay per hour. Tesla will demand a lot of learning and sharing of information, yet will throw the learner and sharer out after three months.
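To make that simple math concrete, here is a minimal sketch comparing a fairly paid temp week with the same week paid flat. The hourly rate and hours are assumptions for illustration, not figures from Tesla's posting.

```python
# Illustrative comparison of a temp week paid with the required 1.5x overtime
# past 40 hours versus the same hours paid flat. Rate and hours are assumed.

def weekly_pay(hourly_rate: float, hours: float, overtime: bool = True) -> float:
    base_hours = min(hours, 40)
    extra_hours = max(hours - 40, 0)
    multiplier = 1.5 if overtime else 1.0
    return base_hours * hourly_rate + extra_hours * hourly_rate * multiplier

rate = 25.0    # assumed hourly rate
hours = 55     # assumed long data-collection week

print(f"With overtime pay:   ${weekly_pay(rate, hours):,.2f}")
print(f"Flat pay, same week: ${weekly_pay(rate, hours, overtime=False):,.2f}")
```

Under those assumptions the gap is a few hundred dollars every single week, which is the kind of money a "full-time temporary" framing conveniently avoids paying.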

To be fair, Tesla might not expect the person to live longer than 90 days. Benefits should describe the funeral and burial a Tesla temp driver’s family should expect.

The role is, in effect, a high-risk operator of an extremely dangerous robot.

Job Category: Autopilot & Robotics […] Operate a vehicle in a designated area for data collection […] Minimum 4 years of licensed driving experiences

This is truly fascinating, because the whole basis for Tesla dumping its unfinished and mislabeled "Autopilot" onto public roads was that it could be operated by anyone, with almost no experience or restrictions.

Now, after years of mounting fatalities from "veered" collisions, Tesla is quietly bringing in unskilled temp drivers?

What is it about Tesla owners’ driving that has the company so worried now, such that they’re paying people behind the scenes to be their drivers instead?

  • Illinois
  • Minnesota
  • Utah
  • Texas
  • Colorado
  • Washington
  • California
  • Florida
  • Georgia
  • New York
  • Arizona
  • Massachusetts

Why so few states? And why only four years of experience? Is that the skill level you want behind your driverless robot?

Not me.

Four years is barely past beginner level for driving expertise, let alone the vehicle knowledge required to assess "proper working order".

Remember how Tesla based its marketing on robots having huge amounts of driving experience, arguing we shouldn’t trust drivers with just a few years?

Now they’re hiring drivers in isolated environments with almost no expertise.

So, to recap: it's full-time yet temporary, driverless yet driven, and someone with beginner-level driving experience is expected to serve as a road expert and trainer for "quality".

Gaslighting. The literal opposite of quality.

It reads to me like Tesla has had no idea what it's been doing and is starting to panic now that regulators are noticing fraud. Like a college student hiring other students as tutors to pass final exams after wasting four years partying.

Why hire dedicated drivers today, if the entire promise of Autopilot was supposed to be realized in those quick early years of needlessly throwing away the lives of any and every Tesla owner? That promise must have been false.

I think we may be witnessing a Tesla that knows its robots are only getting worse over time, but doesn't understand why.

How many people would still be alive today if Tesla had not deployed such a faulty robot design? And can an inexperienced temp worker really bring the unnecessary Tesla fatality rate down?

Related:

Tesla more likely to run over Black people

A Simple Reason Why Tesla Keeps Crashing into Police Cars

Unsafe by Design: Tesla Fails to See Children, Red Lights, Emergency Vehicles…

German Police Crack the Case of Stolen Celtic Gold Coins

Nine months ago in November 2022, a multi-stage heist disabled security at a German museum to steal Celtic gold coins worth almost 2 million euros. Now police say they have recovered coins and solved the case.

German police said Wednesday that four suspects were arrested.

The Bavarian Criminal Police Office announced in Munich that the suspects were arrested during a police operation in the greater Schwerin area, located in Germany’s northeastern state of Mecklenburg-Western Pomerania.

The museum lost 483 Celtic gold coins from 100 BCE. This large theft was somewhat notable due to tradecraft: regional telecommunications were cut in order to disable the museum alarm system, and the door logs recorded egress in less than ten minutes.

Organized crime was naturally suspected at first, and Interpol set up a special investigation unit. Police now say more details will be released this week.

NHTSA Investigating Veered Tesla Head-on Collisions

The AP report today suggests a tragic head-on collision in South Lake Tahoe was caused by Tesla engineering defects.

A Tesla Model 3 and Subaru Impreza collided head on during the evening of July 5, according to state police, and the driver of the Subaru died a short time later. Local media reports say that an infant that had been traveling in the Tesla died last week.

The National Highway Traffic Safety Administration has been looking into a string of accidents involving Teslas that are believed to have had automated driving technology installed.

[…]

Sending special investigation teams to crashes means that the agency suspects the Teslas were operating systems that can handle some aspects of driving, including Autopilot and “Full Self-Driving.” Despite the names used for the technology [they are unable to do what they are called].

I’ve been closely monitoring this case since it was first reported, given its similarity to another recent veered Tesla head-on collision with a Subaru.

If the rise in "Autopilot" crashes is found by investigators to stem from a repeatable engineering defect… then roads clearly should be treated as unsafe anytime a Tesla is on them.

Reuters puts it like this.

The National Highway Traffic Safety Administration said Tuesday it is opening a new special crash investigation into a fatal accident in California involving a 2018 Tesla Model 3 where advanced driver assistance systems are suspected of having been used.

Since 2016, NHTSA has opened more than three dozen Tesla special crash investigations in cases where advanced driver assistance systems such as Autopilot were suspected of being used, with 20 crash deaths reported.

[…]

This is the first new special crash investigation open since March.

How many weeks before the next person is killed by Tesla dumping unsafe product at a high rate into public spaces?