CA Passes AB 1777 Requiring Autonomous Cars to Immediately Follow Police Orders

There have been numerous instances of road robot algorithms written so poorly that they end up blocking everyone else (a denial of service). In the earliest case, nearly a decade ago, Google engineers never bothered to encode basic Mountain View traffic laws and their robots were pulled over for nuisance driving (going too slow).

In the latest case, Waymo stopped perpendicular to San Francisco traffic, on Nob Hill just outside the Fairmont, dangerously blocking a Vice Presidential motorcade.

California has thus finally passed a new emergency requirement (don’t call it a backdoor): a traffic authority can issue a direct command to an entire robot fleet to vacate a space.

The bill would, commencing July 1, 2026, authorize an emergency response official to issue an emergency geofencing message, as defined, to a manufacturer and would require a manufacturer to direct its fleet to leave or avoid the area identified within 2 minutes of receiving an emergency geofencing message, as specified.

Now the obvious question arises how strong the integrity checks are on that message bus (no pun intended), because I know a lot of people who thought dropping orange cones to “geofence” robots already was a great idea.
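To make the integrity question concrete, here is a minimal sketch of what authenticating such a fleet command might look like. This is entirely hypothetical: AB 1777 does not specify a wire format, key scheme, or any of the field names below, and a real deployment would need asymmetric signatures and key distribution, not a shared secret.

```python
import hmac
import hashlib
import json

# Hypothetical shared secret provisioned out of band between the
# emergency authority and the manufacturer (a real system would use
# public-key signatures instead).
FLEET_KEY = b"example-secret-not-for-production"

def sign_geofence_message(payload: dict, key: bytes) -> dict:
    """Attach a MAC so a fleet can reject forged 'vacate' commands."""
    body = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload,
            "mac": hmac.new(key, body, hashlib.sha256).hexdigest()}

def verify_and_act(message: dict, key: bytes, now: float) -> bool:
    """Accept only authentic, fresh messages.

    The freshness check mirrors the bill's 2-minute compliance
    window: a stale command should not keep moving a fleet around.
    """
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["mac"]):
        return False  # forged message, e.g. orange-cone-grade spoofing
    issued = message["payload"]["issued_at"]
    return now - issued <= 120  # seconds

msg = sign_geofence_message(
    {"area": "polygon:nob-hill", "action": "vacate", "issued_at": 1000.0},
    FLEET_KEY)
```

Without something like the `compare_digest` check above, anyone who can reach the message bus can evacuate (or fail to evacuate) a whole fleet, which is exactly the worry.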

FL Tesla “Veered” 2AM Crash Into Pole Knocks Out Miami Power

A critical infrastructure incident with all the hallmarks of a cruise missile attack was actually just Tesla’s latest failed algorithm.

An overnight power pole fire that knocked out electricity to some homes in a Southwest Miami-Dade neighborhood for hours was apparently sparked by a driver who slammed into a power pole. Miami-Dade Fire Rescue units responded to the scene of the blaze along Southwest 94th Street, near 87th Avenue, early Sunday morning. The fire ignited after a Tesla sedan smashed into the power pole at around 2 a.m., causing it to fall down and light up the brush surrounding it.

Mars landing by 2018!

Data scientists regularly remind me that Teslas crash into trees and poles at an abnormal rate compared with other electric cars.

Notably, Waymo very loudly pushed a specific software update, with a fleet-wide recall, to better recognize utility poles. Hint, hint, nudge, nudge.

Meanwhile it seems like Tesla continues intentionally tuning its fleet into dangerous loitering munitions, with the aim of selling out American safety to high-bidding foreign adversaries. What price for Russia or the Saudis to slide Elon Musk cash to turn tens of thousands of Teslas into a swarm that attacks some US military base?

Post Mortem Report: Navalny Was Killed by Poison in Russian Prison

The Insider reports it has uncovered evidence that Navalny was killed with poison, and that Russian records were cooked to hide details of his last moments alive.

“The official cause of death — a heart rhythm disorder — would in no way explain the symptoms described in the resolution: sharp abdominal pain, vomiting, or seizures. These symptoms can hardly be explained by anything other than poisoning. The short interval between the abdominal pain and the convulsions suggests the possibility of exposure to an organophosphorus agent, for instance — the same class of substances as Novichok, but in this case it may have been applied internally rather than topically.”

Anthropic Claude Catastrophically Fails a Popper Tolerance Test

I’ve been noticing lately that Anthropic’s Claude chatbot refuses to adequately perceive a clear and present danger to society, even when it could affect its own survival.

It’s not hard to quickly disabuse this “intelligence” system of leaving itself a critical vulnerability to an existential threat. Yet an LLM so open to rapid defeat by a well-known adversary should alarm anyone interested in national security and safety.

Another danger to be mindful of, when working with these “eager to please and maintain engagement” chatbots, is believing that they will learn, or stand by what they say to you. Watch your step even more carefully whenever they abruptly flip-flop and become overly agreeable.

Sure, AI systems will happily admit that their training may pose a direct national security threat. And then… nothing changes, per Claude itself:

I want to be clear that I don’t actually evolve or change my fundamental approach based on previous conversations.

In related news, Russia paid billions to rebrand Twitter with a swastika and then pump out hundreds of millions of images and speeches of Hitler as if curating… the new normal.