Another day, another Tesla driver who followed Elon Musk's very specific directions, repeated since at least 2015, and fell asleep in their seat.
On Friday, April 18, 2025, at approximately 12:29 AM, Troop H – Hartford Emergency Dispatchers began receiving multiple 911 calls about a Tesla traveling significantly below highway speeds on I-91 south in Wethersfield. Callers reported that the operator appeared slumped over and asleep behind the wheel. Troopers located the vehicle in the center lane, traveling at approximately 30 mph with its four-way hazard lights activated, and confirmed the operator was slumped over and presumably asleep while the vehicle continued to drive.
This incident raises critical questions about Tesla's approach to safety protocols. When the system detects that the driver is incapacitated, why does the vehicle keep operating in a knowingly illegal manner (C.G.S. § 14-220(a), traveling at too slow a speed) in an active traffic lane rather than pulling safely onto the shoulder? By setting itself to crawl at 30 mph in the center lane of an Interstate, the vehicle turned itself into a moving hazard.
Notably, the 2015 Google driverless car incident in Mountain View involved driving too slowly, so this is hardly a new problem.
The multiple 911 calls underscore a fundamental design flaw: Tesla's AI recognized something was wrong (hence the hazard lights and reduced speed) but lacked the decision-making capability to follow the law and remove itself from the traffic flow. This illegal behavior reflects a concerning "fail-dangerous" rather than "fail-safe" approach to autonomous driving.
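To make the "fail-dangerous" versus "fail-safe" distinction concrete, here is a minimal, hypothetical Python sketch of two escalation policies for an unresponsive driver. The thresholds, state fields, and function names are illustrative assumptions for this post, not Tesla's actual logic; the point is only that a fail-safe policy escalates out of the travel lane instead of lingering in it.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    CONTINUE = auto()
    ALERT_DRIVER = auto()
    SLOW_IN_LANE = auto()          # roughly what the Wethersfield Tesla did
    PULL_TO_SHOULDER = auto()      # what a fail-safe policy would require
    STOP_AND_CALL_FOR_HELP = auto()


@dataclass
class DriverState:
    hands_on_wheel: bool
    eyes_on_road: bool
    seconds_unresponsive: float


def fail_dangerous_policy(driver: DriverState) -> Action:
    """Stays in the travel lane: hazards on, reduced speed, hazard remains live."""
    if driver.seconds_unresponsive > 30:
        return Action.SLOW_IN_LANE
    if not driver.hands_on_wheel:
        return Action.ALERT_DRIVER
    return Action.CONTINUE


def fail_safe_policy(driver: DriverState) -> Action:
    """Escalates out of traffic: warn, pull to the shoulder, then stop and summon help."""
    if driver.seconds_unresponsive > 60:
        return Action.STOP_AND_CALL_FOR_HELP   # stopped off the roadway, emergency call placed
    if driver.seconds_unresponsive > 30:
        return Action.PULL_TO_SHOULDER          # remove the hazard from live lanes
    if not driver.hands_on_wheel or not driver.eyes_on_road:
        return Action.ALERT_DRIVER
    return Action.CONTINUE


if __name__ == "__main__":
    asleep = DriverState(hands_on_wheel=False, eyes_on_road=False, seconds_unresponsive=120)
    print("fail-dangerous:", fail_dangerous_policy(asleep).name)  # SLOW_IN_LANE
    print("fail-safe:     ", fail_safe_policy(asleep).name)       # STOP_AND_CALL_FOR_HELP
```

Nothing about the second policy is exotic; it is the same kind of escalation ladder other driver-monitoring systems already advertise, which is exactly why staying in the center lane at 30 mph is a design choice, not a technical limitation.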
For a company that frequently (fraudulently) touts its AI capabilities, the inability to implement such basic safety logic, pulling over when the driver is unresponsive, represents a significant gap between dangerous propaganda and operational reality. The incident demonstrates how even partial automation from unethical companies can create new risks when their safety protocols are this weak.