NYC Subway Hero: Good Samaritan Versus Meta Glasshole

One good Samaritan’s silence is part of why her actions are resonating.

A New York subway rider is going viral after a TikToker accused her of breaking his Meta AI glasses, a moment that instantly made her a folk hero…. The eyewear, which can discreetly record video, has been criticized as a creeping surveillance threat. …the internet has already taken her side, celebrating her as the anti-AI vigilante of their dreams.

No verbal engagement, no explanation, no negotiation. Just direct action against the surveillance infrastructure, then walking away.

She’s been compared to “The Butlerian Jihad,” which Frank Herbert wrote as humanity’s violent rejection of thinking machines after they’d been used for oppression.

We’re watching cultural groundwork being laid for that framing. And there are serious philosophical foundations to consider as well.

Surveillance Ethics: Harm vs Defense

Assault (Legal Definition)
Surveillance as Harm: Assault is the threat or apprehension of harm, not just physical contact. Covert recording creates reasonable apprehension: your image fed into facial recognition, your home address exposed, stalking enabled. The 2024 Harvard demo proved these aren’t hypothetical harms—they’re documented capabilities.
Defensive Response: Breaking the device is defense against ongoing assault, not initiation of assault. You cannot “assault” a weapon.

Bodily Autonomy
Surveillance as Harm: Your image, likeness, and presence in space are extensions of your person. Capturing them without consent is a violation of bodily autonomy—the same principle underlying prohibitions on non-consensual photography, revenge porn, and upskirt laws.
Defensive Response: Defending bodily autonomy against technological violation is ethically equivalent to defending against physical violation.

Consent (Bioethics Standard)
Surveillance as Harm: Informed consent requires disclosure, comprehension, voluntariness, and competence. Covert surveillance by design eliminates all four. The indicator light theater—easily defeated with stickers—demonstrates Meta knows consent is impossible and chose to proceed anyway.
Defensive Response: You cannot retroactively consent to surveillance you didn’t know was happening. “Ask nicely” presupposes knowledge you’re being recorded.

Harm Principle (Mill)
Surveillance as Harm: “The only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others.” Surveillance that enables doxing, stalking, and harassment is demonstrable harm.
Defensive Response: Preventing harm to yourself and bystanders falls squarely within Mill’s framework. Her liberty to destroy his glasses is justified by his harm to others.

Necessity Defense (Legal)
Surveillance as Harm: Property destruction is justified when preventing greater harm. Breaking a window to rescue someone from a fire. Destroying a weapon being used to threaten.
Defensive Response: $300 device destruction vs. enabling real-time doxing of strangers. The proportionality calculation is obvious.

Kant’s Categorical Imperative
Surveillance as Harm: “Act so that you treat humanity, in your own person and in that of another, always as an end and never merely as a means.” Using strangers as “content” for entertainment—”sounds I thought were hilarious”—is treating humans as means.
Defensive Response: She refused to be a means to his content. She acted as an end in herself.

Duty to Rescue
Surveillance as Harm: Many jurisdictions impose a legal duty to assist others facing serious harm when you can do so without unreasonable risk. Bystanders were also being surveilled.
Defensive Response: Destroying the surveillance device protected not just herself but everyone in that subway car. Good Samaritan action.

Feminist Ethics (Vulnerability & Power)
Surveillance as Harm: Technology magnifies existing power asymmetries. Facemash targeted women specifically. Meta’s platform has documented harms to women: enabling stalkers, domestic abusers using location data, teen girls’ mental health. The pattern of gendered harm is established.
Defensive Response: Resistance to gendered technological harm is ethically continuous with resistance to gendered physical harm. Same principle, different vector.

The Inversion Exposed

Glasshole view: “She assaulted me”
Ethical reality: Glasshole was committing ongoing assault via surveillance; she defended

Glasshole view: “She destroyed Glasshole property”
Ethical reality: She neutralized a device being used to harm

Glasshole view: “She should have asked nicely”
Ethical reality: Consent must be sought by the Glasshole acting on others, not demanded by the victim post-hoc

Glasshole view: “$300 Glasshole damage”
Ethical reality: Harm of surveillance (doxing, stalking, harassment) >> property value

Glasshole view: “Glasshole filed a police report”
Ethical reality: Legal system designed before these harms existed; law lags ethics

Glasshole view: “Help Glasshole find her”
Ethical reality: Attempting to use surveillance apparatus to punish resistance to surveillance

The Zuckerberg Through-Line

Facemash (2003)
Action: Used technology to non-consensually rank women’s bodies
Framing: “Prank”
Outcome: Became billionaire

Meta Ray-Bans (2024-25)
Action: Sells technology enabling non-consensual surveillance; user films strangers; woman resists
Framing: “Product feature”
Outcome: She’s framed as criminal

Facebook’s core ethical violation has been the same since its Harvard origins. It’s gone from Facemash to Face-Smash, as the immoral technology has been allowed to scale against humanity.

Waymo is Murder: NYT Pushes VC Doctor Pills to Erase the Dead

The New York Times has published sophisticated propaganda, structurally designed to manufacture consent for the murder machine known as Waymo.

The Data on Self-Driving Cars Is Clear. We Have to Change Course.

The entire function of this PR piece is to make us stop counting the ways Waymo kills by overwhelming us with the ways humans kill. That is cynically erasing the dead.

He never addresses correlated failure: I know he didn’t read this blog, because my core point, that fleet-wide bugs aren’t comparable to heterogeneous human error, doesn’t appear. It’s the most important methodological objection and he doesn’t even touch it.
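To see why that objection matters, here is a minimal simulation sketch, with made-up rates chosen only for illustration: two fleets with nearly identical average crash rates, where one fails independently (human-style) and the other fails in correlated bursts whenever a shared software defect ships to every vehicle at once.

```python
# Minimal sketch, made-up rates: same average crash rate, very different
# failure structure. Humans fail independently; a robotaxi fleet can fail
# together when one bad software or map update ships to every vehicle.
import numpy as np

rng = np.random.default_rng(0)
fleet_size = 10_000            # vehicles or drivers (hypothetical)
days = 365
base_rate = 1e-4               # per-vehicle, per-day crash probability (hypothetical)

# Human-style: independent, heterogeneous errors.
human_daily = rng.binomial(fleet_size, base_rate, size=days)

# Fleet-style: slightly safer on normal days, but a rare shared bug
# raises every vehicle's risk at once on the same day.
bug_day_prob = 0.01            # roughly 3-4 bug days a year (hypothetical)
daily_rate = np.where(rng.random(days) < bug_day_prob,
                      20 * base_rate,    # correlated spike on bug days
                      0.8 * base_rate)   # otherwise better than humans
robot_daily = rng.binomial(fleet_size, daily_rate)

for name, crashes in (("human", human_daily), ("fleet", robot_daily)):
    print(f"{name}: mean {crashes.mean():.2f}/day, worst day {crashes.max()}")
# The means are nearly identical; the worst days are not. Averaged data
# alone cannot distinguish these two safety profiles.
```

Same denominator, same headline rate, completely different risk once the whole fleet can share a single defect.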

The clinical trial analogy is intellectually dishonest: actual trials require randomization, blinding, independent data collection, and FDA oversight. Dr. Waymo? Guess who controls all the data, self-selects operating conditions, and has no independent verification. He’s borrowing the epistemic authority of medical research while violating every principle that makes medical research trustworthy.

The comparison is rigged from the start: “Human drivers on the same roads” sounds controlled, but humans drive those roads in rain, at night, hungover, distracted, in vehicles with failing brakes. Waymo operates in pre-mapped ideal conditions and pulls over when confused. It’s comparing a curated highlight reel to raw game footage.
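As a toy illustration of how rigged that comparison is, take some made-up numbers (none of these are real Waymo or human figures) and stratify by condition:

```python
# Toy numbers only (nothing here is a real Waymo or NHTSA figure):
# an overall human rate blends hard conditions the robotaxi never drives.
human_rate = {"clear_day": 2.0, "night_or_rain": 8.0}   # crashes per 1M miles
human_mix  = {"clear_day": 0.6, "night_or_rain": 0.4}   # share of human miles

human_overall = sum(human_rate[c] * human_mix[c] for c in human_rate)  # 4.4
robotaxi_clear_day_only = 2.5                                          # hypothetical

print(f"human, all conditions:     {human_overall:.1f} per 1M miles")
print(f"robotaxi, clear days only: {robotaxi_clear_day_only:.1f} per 1M miles")
print(f"human, clear days only:    {human_rate['clear_day']:.1f} per 1M miles")
# Headline comparison: 2.5 vs 4.4 looks like a big safety win.
# Apples to apples: 2.5 vs 2.0, i.e. worse in the only conditions the
# robotaxi actually drives. The undisclosed mix of conditions is the trick.
```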

He buries the conflicts: a healthcare VC making deployment arguments, with the disclosure appearing only after 1,500 words of persuasion. In actual medical publishing, conflicts go at the top.

He acknowledges the accountability void then crashes into it: “We need the denominator, not just the numerator”—but who audits the numerator? Waymo. Who controls the crash definitions? Waymo. Who decides what gets reported? Waymo.

The timing is reputation management: Published December 2 while Waymo is bleeding from school bus violations, KitKat, the police standoff. This is crisis PR with an MD byline.

A doctor is arguing we should trust corporate self-reported data over democratic accountability mechanisms, in the name of public health.

That’s obscenity.

A venture capitalist is using his medical credentials to make dangerous technology deployment arguments that are a profit-driven threat to public health.

Game over.

He may be a doctor, but I study disinformation.

Related: “Glyphosate safety article retracted eight years after Monsanto ghostwriting revealed in court”.

Bottom Line on Kohler Toilet E2EE Claims

A security researcher, supposedly exposing privacy risk in a Kohler networked toilet, didn’t get to the bottom of anything.

But seriously, what is this crap?

The initial issue with Kohler using the term “end-to-end encryption” is that it’s not obvious how it could apply to their product. The term is generally used for applications that allow some kind of communication between users, and Kohler Health doesn’t have any user-to-user sharing features. So while one “end” would be the user, it’s not clear what the other end would be.
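For what it’s worth, here is a minimal sketch, using PyNaCl with purely illustrative key names, of why “end-to-end” hinges entirely on who holds the other key:

```python
# Minimal sketch with PyNaCl and illustrative key names: encryption is
# always "end-to-end" between whoever holds the two keys. If the second
# key lives in the vendor's cloud, the vendor is simply one of the ends.
from nacl.public import PrivateKey, Box

device_key = PrivateKey.generate()    # the toilet / phone app: one end
vendor_key = PrivateKey.generate()    # the vendor's backend: the other end

# The device encrypts a reading to the vendor's public key.
ciphertext = Box(device_key, vendor_key.public_key).encrypt(b"scan #42 ...")

# Networks and storage in between can't read it, but the vendor can,
# trivially, because it was encrypted *to* them. That is encryption to a
# service, not user-to-user E2EE in the messaging-app sense.
print(Box(vendor_key, device_key.public_key).decrypt(ciphertext))
```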

The researcher takes the term E2EE, which already has a compromised meaning, pretends it has a pure canonical definition, and then catches Kohler failing to meet his fictional standard.

That’s a definitional sleight of hand and for what end, exactly?

WhatsApp is a Facebook product that falsely claims E2EE. When they say “we can’t read your messages” they actually mean they can read your messages when your contact taps a Report button, and they harvest all the metadata, and cloud backups may be accessible, and….

The researcher even cites WhatsApp as an example of E2EE. That’s like saying Exxon is an example of how to protect the environment, or Marlboro is an example of healthy living.

At least Kohler is being plain and honest about being an end of their own encryption. They say what they will do with the data and why. What the hell are huge warehouses of WhatsApp staff doing with all the data they harvest from bogus E2EE, which apparently fooled even this researcher into promoting it?

Talk about burying the lede: if you want to hunt vulnerabilities, the Kohler AI training angle is actually interesting research! That’s where you could say it’s behind in the privacy department.

What happens when the de-identified stool image datasets get breached or sold? What’s the actual re-identification risk? What are the clinical validation standards for the insights they’re selling?

Instead we got “users at a company who use your data can access your data.”

No shit.

A real security/privacy analysis of the back-end architecture was available and the researcher chose definitional games instead. I mean, if you want to hate on Kohler, there is plenty to dislike without cooking up encryption semantics.

The subscription model is $600 for hardware that becomes a brick if you stop paying or they shut down. That’s the enshittification lifecycle applied to your actual toilet.

De-identification is hard, and these are distinctive dumps. Stool images are biometric-adjacent. The claim that de-identified toilet photos can’t be re-identified is… doubtful.
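A toy linkage sketch, with entirely made-up columns and values, shows how little it takes to re-link “de-identified” scan records once any stable quasi-identifier survives in the export:

```python
# Toy linkage attack, entirely made-up data: a "de-identified" export that
# keeps timestamps and a stable device/network identifier can be re-linked
# to a named person via any auxiliary dataset sharing that identifier
# (broker files, breached ISP records, another app's telemetry).
import pandas as pd

deidentified = pd.DataFrame({
    "scan_time": ["2025-12-01 07:14", "2025-12-02 07:09", "2025-12-03 07:21"],
    "device_id_hash": ["a9f3"] * 3,   # "anonymous" but stable across scans
})

auxiliary = pd.DataFrame({            # hypothetical broker or breach data
    "device_id_hash": ["a9f3", "77c2"],
    "subscriber": ["Jane Roe", "John Doe"],
})

relinked = deidentified.merge(auxiliary, on="device_id_hash")
print(relinked[["scan_time", "subscriber"]])
# No names were ever stored with the scans, yet every scan now has one.
```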

The gut health insight market is largely unvalidated. What evidence-based intervention follows from the data? “Your stool is different today” brings what actionable change beyond what you can already detect naturally? It’s quantified self for a process that mostly works fine without surveillance.

Attack surface expansion. Your toilet worked fine before. Now it’s a networked sensor with dependencies, firmware updates, and an app that needs permissions. Every connected device adds more liability; this one points at you with your pants down.

Subscription healthtech has misaligned incentives. They need you anxious enough to keep paying but not so alarmed you see a real doctor. That’s a weird optimization target to sit on.

And so forth… as I’ve said on this blog about “log” data in wastewater for at least ten years, if not more.

Log analysis for wastewater plants

AZ Tesla in “Veered” Crash Head-on Into Dump Truck

The report makes it clear: the Tesla caused a sudden “veered” acceleration event, like a suicide attack.

Based on the initial investigation, the driver of the Tesla was originally traveling eastbound on Cactus Road before veering left and driving the wrong way into the westbound lanes of Cactus Road, where it collided head-on with the dump truck in the curb lane.

The video makes it even clearer.