Menlo Park Police Criticize Their $350K Tesla Patrol Cars as Unsafe and Unfit

In 2021 Menlo Park politicians voted Ford, Chevy, and Tesla electric cars into the budget. Specifically, three Teslas were assigned to the police department.

…the police department will initiate a $350,000 pilot program to test three Tesla electric cars…

Now they say the Teslas are unfit for purpose and unsafe.

…autopilot would occasionally interfere with the types of driving necessary during the course of police patrol. Officers relayed in the staff report that “on occasion, the Teslas automatically stop when an officer attempts to pull off to the side of the road to approach vehicles or people.” […] “I am very proud that we tried the Teslas, and not everything works,” said Council member Betsy Nash.

Proud that not everything works? I see an easily avoidable $350K mistake, which doesn’t bring pride to mind. They could have bought electric bicycles and still had a better outcome.

Officers also said because of the vehicle’s makeup, they weren’t really able to be used for off-roading or jumping curbs. When officers pull someone over, the Teslas have automatically stopped instead of continuing to pull behind another car. Doors also close automatically with even a slight incline, Paz’s report says. There have also been issues with the cars remaining unlocked when an officer is near the vehicle but has walked away from it.

The report leaves out a lot of details, while also giving the impression that Tesla was not cooperative at all. Consider Elon Musk getting all the data from police cars, including video streams. Worse, imagine him pressing a button to prevent police in his robot cars from pursuing someone he wants to protect (e.g. anti-government right-wing extremists), the same way he exercises control over Twitter.

Presumably the Council is still happy with the Ford and Chevy decisions made at the same time as the Tesla debacle. The next police car for Menlo Park is reported to be a new Chevy.

UK Tesla Kills Pedestrian

Almost all Tesla crashes I’ve read about lately involve dangerous drivers in their 20s.

Norfolk Police said a white Tesla Model 3 struck a man in his 30s on South Beach Parade at 21:43 BST on Tuesday. The force added that a pedestrian “sadly died as a result of his injuries” and the driver of the Tesla, a man in his 20s, continues to be questioned by officers.

It’s notable because CEO Elon Musk’s promise was that Tesla would be the safest car on the road by making safe driving easier. It seems to have done the exact opposite, making the worst drivers far more deadly.

Anthropic Leads AI Safety Regulation as OpenAI Misses the Mark on California Bill SB-1047

California’s SB-1386 revolutionized data breach notifications globally, proving that state-level regulation can drive widespread change even without federal action. I was on the front lines in Silicon Valley detecting and preventing breaches in 2003, and I remember like it was yesterday the positive sea change after California passed its law. In this light, OpenAI’s opposition to California’s proposed AI regulation SB-1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act) appears shortsighted and out of touch with an important recent historical precedent.

While OpenAI cites concerns about innovation and talent retention, these arguments echo past objections to groundbreaking regulations that have since become industry standards without issue. Their hollow-sounding stance seems more like an attempt to avoid oversight than a genuine concern for progress.

Conversely, Anthropic’s evolving, more receptive approach to the revised bill demonstrates a nuanced understanding of regulatory dynamics and the potential for well-crafted legislation to foster responsible innovation.

Importantly, technologies like W3C’s Solid project (solidproject.org) already offer innovative solutions that could help AI companies meet the proposed bill’s requirements. Solid’s decentralized data architecture provides users with unprecedented control over their data, enabling easy opt-out mechanisms, transparent data usage tracking, and even simplified implementation of “kill switches” for AI systems.
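
To make that concrete, here is a minimal sketch of what a Solid-based opt-out check could look like, using the open-source @inrupt/solid-client library. The Pod path, consent document location, and vocabulary URL below are illustrative assumptions of mine, not anything specified by SB-1047 or the Solid specification.

```typescript
// Sketch: an AI service checks a user's Solid Pod for an explicit
// training-consent flag before using that user's data. The resource
// path and predicate URL are hypothetical, chosen for illustration.
import {
  getSolidDataset,
  getThing,
  getBoolean,
} from "@inrupt/solid-client";

// Illustrative predicate; a real deployment would use an agreed vocabulary.
const TRAINING_CONSENT = "https://example.org/vocab#allowModelTraining";

async function mayTrainOnUserData(podRoot: string): Promise<boolean> {
  // Hypothetical location of a consent document in the user's Pod.
  const consentUrl = `${podRoot}settings/ai-consent`;
  try {
    const dataset = await getSolidDataset(consentUrl);
    const consent = getThing(dataset, `${consentUrl}#this`);
    // Default to "no" unless the user has explicitly granted consent.
    return consent ? getBoolean(consent, TRAINING_CONSENT) === true : false;
  } catch {
    // No readable consent document: treat the user as opted out.
    return false;
  }
}

// Usage: exclude any Pod whose owner has not opted in.
mayTrainOnUserData("https://alice.solidcommunity.net/").then((ok) => {
  if (!ok) console.log("User opted out; excluding data from training.");
});
```

Because the user, not the AI company, hosts the consent document, revoking consent (or flipping a Pod-wide “kill switch”) takes effect the next time the service checks, with no request to the vendor required.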

OpenAI’s failure to recognize or advocate for such existing technological solutions further underscores their misalignment with forward-thinking approaches to AI governance. By embracing rather than resisting thoughtful regulation and leveraging innovative technologies like Solid, companies can position themselves at the forefront of responsible AI development.

In essence, Anthropic’s engagement with regulators and openness to adaptive solutions aligns with the historical pattern of California’s regulatory leadership driving positive industry-wide change. Meanwhile, OpenAI risks finding itself on the wrong side of history, missing an opportunity to shape responsible AI practices that could become global standards.