UK Tesla Kills Pedestrian

Almost all Tesla crashes I’ve read about lately involve dangerous drivers in their 20s.

Norfolk Police said a white Tesla Model 3 struck a man in his 30s on South Beach Parade at 21:43 BST on Tuesday. The force added that a pedestrian “sadly died as a result of his injuries” and the driver of the Tesla, a man in his 20s, continues to be questioned by officers.

It’s notable because CEO Elon Musk promised that Tesla would be the safest car on the road by making safe driving easier. It seems to have done the exact opposite, making the worst drivers far more deadly.

Anthropic Leads AI Safety Regulation as OpenAI Misses the Mark on California Bill SB-1047

California’s SB-1386 revolutionized data breach notifications globally, proving that state-level regulation can drive widespread change even without federal action. I was on the front lines in Silicon Valley detecting and preventing breaches in 2003, and I remember, like it was yesterday, the positive sea change after California’s law took effect. In this light, OpenAI’s opposition to California’s proposed AI regulation SB-1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act) appears shortsighted and out of touch with important recent historical precedent.

While OpenAI cites concerns about innovation and talent retention, these arguments echo past objections to groundbreaking regulations that have since become industry standards without the predicted harm. Their hollow-sounding stance seems more like an attempt to avoid oversight than a genuine concern for progress.

Conversely, Anthropic’s evolving, more receptive approach to the revised bill demonstrates a nuanced understanding of regulatory dynamics and the potential for well-crafted legislation to foster responsible innovation.

Importantly, technologies like W3C’s Solid project (solidproject.org) already offer innovative solutions that could help AI companies meet the proposed bill’s requirements. Solid’s decentralized data architecture provides users with unprecedented control over their data, enabling easy opt-out mechanisms, transparent data usage tracking, and even simplified implementation of “kill switches” for AI systems.
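To make the opt-out idea concrete, here is a minimal sketch of how an AI company might check a preference a user has published in their Solid pod. It is only an illustration: the pod URL, resource path, and predicate IRI are hypothetical placeholders rather than any published Solid vocabulary, and it assumes a publicly readable preference document; private resources would additionally require Solid-OIDC authentication.

```typescript
// Minimal sketch (TypeScript, Node 18+): check an AI-training opt-out
// preference published in a user's Solid pod. Solid resources are ordinary
// HTTP resources, commonly serialized as Turtle, so a plain GET suffices
// for a public document.

// Hypothetical placeholders -- not a standard Solid location or vocabulary.
const PREFERENCE_DOC = "https://alice.example/public/ai-preferences.ttl";
const OPT_OUT_PREDICATE = "https://example.org/ns/ai#optOutOfTraining";

async function userHasOptedOut(): Promise<boolean> {
  const response = await fetch(PREFERENCE_DOC, {
    headers: { Accept: "text/turtle" },
  });
  if (!response.ok) {
    // No preference document found: treat as "no recorded opt-out".
    return false;
  }
  const turtle = await response.text();
  // Simplified check; a real implementation would parse the Turtle with an
  // RDF library instead of matching strings.
  return turtle.includes(OPT_OUT_PREDICATE) && turtle.includes("true");
}

userHasOptedOut().then((optedOut) => {
  console.log(
    optedOut
      ? "Exclude this user's data from training."
      : "No opt-out recorded."
  );
});
```

The same pattern extends to transparency: because a pod is simply an HTTP server the user controls, a company could also write a usage-log entry back to it each time the user’s data is touched, giving the user an auditable record.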

OpenAI’s failure to recognize or advocate for such existing technological solutions further underscores their misalignment with forward-thinking approaches to AI governance. By embracing rather than resisting thoughtful regulation and leveraging innovative technologies like Solid, companies can position themselves at the forefront of responsible AI development.

In essence, Anthropic’s engagement with regulators and openness to adaptive solutions aligns with the historical pattern of California’s regulatory leadership driving positive industry-wide change. Meanwhile, OpenAI risks finding itself on the wrong side of history, missing an opportunity to shape responsible AI practices that could become global standards.

Tesla Cybertruck Catches Fire From Crash Into Fire Hydrant

Touching water continues to be a problem for Tesla engineering; in this case it apparently caused the company’s ill-conceived vehicle to burst into toxic flames.

The crash took place at 4:45 p.m. between Sam’s Club and Bass Pro Shop, off of Spur 54 and Bass Pro Drive, when a Cybertruck crashed into a fire hydrant. Assistant Fire Chief Ruben Balboa of the Harlingen Fire Department said the Cybertruck’s battery ignited after water from the fire hydrant soaked it.

Read that again. When a Cybertruck crashed into a fire hydrant and was soaked with water, it caught fire.