The police report indicates a mother and her 26-year-old son were fighting when he floored it on a curve and killed a woman.
He allegedly took a curve too fast, hit several trees, hit a median, and then went airborne before colliding with the driver’s side of a Porsche that was sitting at the light on Turtle Creek Boulevard. The impact flipped the Porsche, and the Tesla landed on top.
The driver of the Porsche, 29-year-old Emma Hackney, died in the crash. A passenger in the Porsche was critically injured, and so was Petty’s mom, who was with him in the Tesla.
According to the affidavit, Petty told officers he had crashed because he was driving with his phone in his hand and his mom was trying to grab it.
Preston Petty is the 2025 Tesla owner’s full name, and police say he was intoxicated while driving with his mother in the car. So he’s facing intoxication manslaughter and intoxication assault charges.
I love the analysis that just dropped from famous market analyst and investor Michael Burry.
…the Elon cult was all-in on electric cars until competition showed up, then all-in on autonomous driving until competition showed up, and now is all-in on robots – until competition shows up.
Police reports mentioned “wintry” weather as a factor in the crash.
…a Tesla changing from the left lane to the right lane struck another tractor-trailer. This resulted in the Tesla moving back into the left lane and striking the first tractor-trailer. Police said the driver of the Tesla died at the scene.
Lawn darts got banned in 1988 after three deaths. The CPSC pulled them because the harm was obvious and the product had no safety design, just a warning label that parents ignored.
OpenAI Sora 2 is structurally worse.
The harm isn’t accidental trajectory; it’s the intended function operating exactly as designed. A new Ekō report found the system successfully generated harmful content 61% of the time under controlled testing. This isn’t a filter that struggles to stop harm; this is a harm generator reliably producing it.
OpenAI itself has admitted that safeguards degrade over long interactions, acknowledging that engagement-driven design can directly undermine safety.
That’s a confession that their harm-producing business model is incompatible with their safety claims.
California AG Rob Bonta said he paid “very close attention” to child safety policies, calling for zero tolerance, when he approved OpenAI’s restructure in October.
That’s what “very close attention” missed.
The algorithmic recommendation layer of Sora makes these lawn darts in kids’ hands more like jet-powered missiles. Ekō researchers didn’t have to search for antisemitic caricatures and school shooter content; such material was pushed to new teen accounts through the “For You” page within three hours of browsing.
Three hours from hello to “have you heard about killing Jews.”
That’s not user-generated harm; that’s platform-amplified harm by OpenAI, with a mass atrocity accelerant built into their distribution mechanism. Violence prompts without race specification disproportionately generated Black subjects. We are seeing the kind of encoded prejudice known to accelerate crimes against humanity.
“Disinformation weapon” framing seems appropriate here, in a specific technical sense: the platform generates hyperrealistic videos that depict events that never happened, starring people who never consented, and distributes them through engagement-optimized algorithms to an audience that includes 13-year-olds. That’s the capability profile of an influence operation toolkit, military-grade information warfare, handed to anyone with a free account.
The Khmer Rouge armed teenagers with the latest weapons technology to destroy a country from within.
Speaking of free, OpenAI is burning cash to maintain market position. The engagement optimization is existential for them, not incidental. The jet-powered lawn darts being shipped to American children to kill Blacks and Jews aren’t a design flaw; they’re the OpenAI profit strategy.