Report to GM Board of Directors on Cruise “Sev-0” Oct 2 Crash Into Pedestrian

Reading the full report, I found an investigation table insightful.

Source: REPORT TO THE BOARDS OF DIRECTORS OF CRUISE LLC, GM CRUISE HOLDINGS LLC, AND GENERAL MOTORS HOLDINGS LLC REGARDING THE OCTOBER 2, 2023 ACCIDENT IN SAN FRANCISCO, January 24, 2024

The crux of the complaints relates to GM not transmitting the “dragging” data, which establishes why and how the robot likely hurt this pedestrian after impact far worse than a human driver would have.

Further along, the report makes plain that Cruise did not disclose that its robot did the wrong thing, or that it significantly increased harm to the pedestrian.

Communications team members also continued to give reporters the following bullet point on background: “[t]he AV came to a complete stop immediately after impacting the struck pedestrian,” even though by this time Cruise, including senior members of its communications team, knew that the AV moved forward immediately after striking the pedestrian. Cruise communications team members gave this statement to media reporters after the 6:45 a.m. SLT meeting, some of whom published it, well into the afternoon of October 3, including Forbes, CNBC, ABC News Digital, Engadget, Jalopnik, and The Register.

That’s not good. But the worst part is when Cruise staff defined harming pedestrians in an urban environment as an “edge” case they aren’t concerned about.

Vogt reportedly characterized the October 2 Accident as an extremely rare event, which he labeled as an “edge case”.

Cold. Cruel. Immoral.

This is a good reminder that American “death corridors” in cities were no accident. And I’ve been saying since 2016 that robots on roads will kill a lot of pedestrians, the exact opposite of an edge case.

OR Tesla Totaled by 24-Year-Old With Five Years of Driving Violations

It makes sense that a violent repeat offender would choose a Tesla to stomp the accelerator straight into a wall.

In 2019, he pleaded guilty to driving under the influence of intoxicants, hit-and-run and fourth-degree assault. In 2022, he pleaded guilty to driving under the influence of intoxicants and recklessly endangering another person.

At 8 a.m. in Bend, Oregon, he was practically engaged in an act of domestic terrorism.

Police reviewed video footage, which they say showed a white Tesla driving more than 60 mph while heading south in the northbound lanes and sidewalk on NE Third Street in Bend. The Tesla crashed at the entrance to the US Foods Chef’Store on Third Street, rolled “multiple times” and stopped at the retaining wall at U.S. Bank.

Driving the wrong way at high speed and on sidewalks? It’s surprising he didn’t kill anyone, as happened in the other tragic Tesla manslaughter case in Oregon involving a known repeat offender.

If you’re in Oregon and see a Tesla, be ready for a disaster.

NRA Offers Reward to Children Who Promote Early Death From Guns

Business Insider offers some food for thought on the NRA paying children a pittance not only to accept but to actively promote the gun violence and self-harm campaigns killing them and their friends and family.

Leaving aside the oddness of asking the youngest of grade schoolers how the constitutional right to bear arms affects them personally, the contest raises alarms for gun-control advocates.

Gun violence was the No. 1 cause of death for US children in 2021…

“They’re selling a lie, and it’s a very dangerous lie,” Brown [the president of the gun-safety group Brady] added. “They are selling it to your kids, and they don’t care if it’s killing them.”

Imagine the tobacco companies sponsoring contests for children to write about cancer-causing smoking as a Constitutional freedom.

By the time they are capable of making a mature judgment, their health may be harmed irrevocably and their decisional capacity impaired by the product’s addictive qualities.

Dead. I think the analysis misstated it: by the time they are capable… these targeted kids, and those around them, already are dead. I say this as a person who grew up in the heart of rural American gun nut culture; by 12 years old I already had been shot and wounded, requiring hospitalization.

The number of children and teens killed by gunfire in the United States increased 50% between 2019 and 2021…

Also, as a historian, I have to point out that British soldiers in WWII reported a strategy of God and Chocolate that melted the Nazi child’s cold coal heart, full of false fears and nightmares. The orientation of German kids toward mass suicide was the result of rapidly disseminated and highly targeted (authoritarian) disinformation.

How to Speed Up Military Drone Innovation in America

German news captures 2022 sentiment that Russia is growing weaker by the drone

In a rather superficial analysis featured on War on the Rocks, the discourse on artificial intelligence (AI) reveals a surprising lack of depth. In essence, the argument suggests that by lowering expectations, particularly in terms of reliability, the concept of “innovation” is reduced to nothing more than pushing a colossal and conveniently uncomplicated “plug and pray” button.

The authors’ apparent reductionist perspective not only fails to grasp the intricacies of AI’s potential in the realm of warfare but also overlooks the nuanced challenges that seasoned military analysts, with decades of combat experience, understand are integral to the successful integration of advanced technologies on the battlefield.

America’s steadfast commitment to safety and security assumes that the United States has the three to five years to build said infrastructure and test and redesign AI-enabled systems. Should the need for these systems arise sooner, which seems increasingly likely, the strategy will need to be adjusted.

A closer examination of America’s commitment to safety and security reveals that a steadfast commitment inherently implies less reliance on assumptions. The authors, however, leave a significant void in their argument by not adequately clarifying their position on this. The closest semblance of an alternative is their proposition of a vague aspirational path labeled AI “assurance,” positioned between the extremes of measured caution and imprudent haste.

…urgently channel leadership, resources, infrastructure, and personnel toward assuring these technologies.

A realist imperative, however, underscores the dynamic nature of the geopolitical landscape, necessitating a proactive stance rather than a reactive one. Three to five years ahead is a tangible goal, instead of shrinking release cycles to the imprudent “burn toast, scrape faster” mentality. The strategic imperative lies not merely in constructing a sophisticated AI apparatus but also in ensuring resilience and adaptability to the predictable exigencies of future conflict scenarios.

Here are a few instances of downrange events that unequivocally warrant the disqualification of AI innovations, a consideration surprisingly absent in the referenced article:

Source: My presentation on hunting robots, 2023 RSA SF Conference

This War on the Rocks article by a “native Russian speaker”, however, shamelessly bestows excessive praise on Russia for its acceleration toward an ill-conceived “automated kill chain” characterized by total disregard for baseline assurances. In doing so, the authors fail to acknowledge the pivotal point of drone engineering on the battlefield: oppressive Russian corruption and hollow patronage were left behind as Ukraine strongly asserted measured morality and quality control, which has been the true catalyst for Ukraine’s rapid and successful drone innovations (leaving the Russians always in a clueless catch-up mode).

Russia’s reckless pursuit and indiscriminate deployment of AI, as highlighted in the War on the Rocks article, contribute to the mounting evidence of Russian tanks and troops being grossly outmatched by adversaries who prioritize fundamental training and employ sophisticated countermeasures.

A desire to switch into the “at any cost” haste of catch-up mode, lacking any morality, is of little benefit when it brings about overwhelming technical debt and self-destructive consequences.

Remarkably, the authors neglected to explain their omission of Ukrainian strides in “small, relatively inexpensive consumer and custom-built drones” as an integral aspect of an effective American military targeting strategy. Equally puzzling is their apparent belief that innovation ceases when others replicate it.

Taking a broader perspective, the American military ethos, characterized by augmentation of skilled professionals in tanks, has demonstrably outshone Russia’s reliance on over-automation guided by disposable conscripts, who stupidly kill themselves even faster than their enemy can. Despite Russia’s boastful rhetoric, its inability to distinguish between effective and ineffective strategies echoes historical patterns familiar to statisticians of World War II examining the Nazis’ lack of technological prowess.

AI, far from being an exception to historical trends, appears to be a recurrence of unfavorable chapters. Reflect on the crossbow, the longbow, the repeating rifle, or even Churchill’s “water” tanks (e.g. how America ended up mass-producing Britain’s innovations)… and the trajectory becomes evident. Throughout history, advancements in genuine measures of safety and security (weapon assurance as a practical discipline) have defined battlefields.

Abraham Lincoln famously urged the prudent use of time to sharpen an axe before felling a tree, a maxim applicable to any technology. The historical narrative strongly indicates that AI, as a technological frontier, will only serve to underscore the enduring wisdom encapsulated in the words of the President who delivered an unconditional victory in America’s Civil War.