Category Archives: Security

The New Tobacco Playbook: How AI Venture Capitalists Sell Us On Digital Incarceration

In the 1940s, tobacco companies paid doctors $5,000 (equivalent to $60,000 today) to recommend cigarettes as a treatment for throat irritation. They hosted lavish dinners, set up “hospitality booths” at medical conventions, and even had cigarette packs embossed with doctors’ names. All to exploit public trust in medical authorities and calm growing concerns about smoking’s health risks. By the 1950s it was clear tobacco was causing cancer on a mass scale, yet the tobacco companies threw propaganda punches so successful they promoted willing conspirators all the way into the White House.

Ronald Reagan himself appeared in targeted cigarette ad campaigns that exploited public trust in familiar faces. Tobacco and cigarette use killed tens of millions of Americans before accuracy in health information could be restored to public messaging.

Today, venture capitalists are running a remarkably similar playbook with AI. But instead of selling us on inhaling smoke, they’re pushing us to tune out and “upload ourselves” into their proprietary AI systems.

Market Manipulation as a Playbook

A recent article by a16z partner Justine Moore titled “Export Your Brain: How I Uploaded Myself to AI” perfectly exemplifies harmful propaganda tactics. Let’s break down the parallels:

  1. Trusted Voices: Just as tobacco companies leveraged doctors’ credibility, VCs are positioning themselves as tech “experts” whose advice we should trust about AI adoption.
  2. Harm Minimization: Moore writes that always-on AI surveillance “might sound invasive today, but so did capabilities like location sharing.” This mirrors how tobacco companies dismissed health concerns as temporary squeamishness.
  3. Rebranding Unhealthy Dependency as Health: Where cigarette companies promoted smoking for “throat comfort,” VCs are selling AI as essential for mental health and self-understanding. Moore suggests using AI for therapy-like functions and “better understanding yourself.”
  4. Hiding Financial Motives: Just as doctors’ cigarette recommendations were bought with fishing trips and dinners, venture capitalists promoting “AI brain exports” stand to profit from their portfolio companies’ success.
  5. Building Social Pressure: The article implies this technology is inevitable and that everyone will eventually use it – classic tobacco industry tactics to create social pressure for adoption.

Simple a16z Text Analysis

  • No discussion of privacy protections
  • No mention of security measures
  • Absence of data rights framework
  • Limited mention of consent mechanisms
  • No mention of data ownership after sharing

Many data misuse examples from recent history (like Cambridge Analytica or Clearview AI) reinforce why these concerns aren’t theoretical. AI regulation efforts (like the EU’s AI Act) show what protections look like, yet somehow these concepts are completely ignored by an article promoting the opposite.

False Choice Fallacy of Feudalism

Most disturbingly, this harmful narrative presents a false choice that mirrors another crisis facing Americans today: the corporate capture of housing. Just as private equity firms are buying up residential properties and forcing people into the instability of permanent renting, venture capitalists are trying to convince us to give up our autonomy and “rent” our digital selves from their incarcerating AI platforms.

Consider these obvious parallels:

Housing Market | AI “Brain Upload” Market
Investment firms buy up residential properties en masse | VCs fund platforms that capture personal data
Convert homeownership opportunities into rental-only options | Convert what could be personal data sovereignty into “AI services”
Steadily increase rents, extracting more wealth over time | Steadily increase dependency on their platforms
Create artificial scarcity in the housing market | Create artificial barriers to personal data control
Force people into permanent tenancy rather than ownership | Force people into permanent digital tenancy rather than ownership

In both cases, we’re seeing the same playbook from elites who prioritize investment gains at others’ expense. They convince unwitting victims that ownership (whether of homes or data) is supposedly too complex and expensive for individuals to manage, offering entrapment disguised as “convenience” even though it’s deliberately designed to cost more in the long run. This surrender of control to corporations is affectionately known in Silicon Valley as building “digital moats.” Except instead of protecting users from harm, these moats work in reverse to prevent escape: technology deliberately designed as an exit barrier for extracting wealth from those who’ve been “uploaded.”

Better Path? I’m Glad You Asked: Freedom of Movement and Ownership

Just as the answer to the housing crisis isn’t surrendering to massive investment firms that generate corporate landlords owning everyone’s space, the solution to AI isn’t surrendering our digital selves to venture-backed cheerleaders of societal disruption. We already have technology for genuine digital ownership that respects centuries of human progress in liberty and justice. The W3C Solid standards exemplify a framework that provides essential protections against exploitation while enabling all the benefits of AI:

  • Store personal data in secure “data wallets”
  • Control exactly what information is shared and with whom
  • Revoke access when appropriate for safety and security
  • Keep data portable, not “uploaded” forever into any single company’s control

How the Solid Protocol Works

Users have personal data wallets for their entire digital lives, like a modern take on the safe deposit box, the 1800s revolution in asset protection and personal wealth preservation:

  • Your data stays encrypted on servers you choose
  • Apps request specific permissions to access specific data
  • You can revoke access at any time
  • Data can be moved between providers easily

This fundamentally differs from unhealthy “brain upload” models that encourage victims to fall into systems designed to exploit them while preventing escape.

Think about the difference between owning a brick house with a lock on the door and a roaring fire in the chimney… and a pitch by the wolves of Palo Alto to leave and voluntarily upload yourself into a corporate-owned campsite with straw huts.

The better to see you…

Maybe uploading isn’t safe unless there’s evidence of safety. Like, any evidence at all?

Don’t Forget the Fairy Tale Was a Warning

When tobacco companies manipulated doctors to promote smoking, they caused incalculable harm to public health, killing tens of millions of people for profit with little to no accountability. Today’s venture capitalists are pushing for something potentially even more invasive: the voluntary surrender of our most intimate thoughts, memories, and psychological patterns to proprietary AI systems. Many millions more will die in such a system, without exaggeration.

The promise of AI assistance doesn’t require surrender and regression to the worst chapters in history. We can build systems that respect human autonomy and data rights. But first, we must recognize the clear manipulation tactics used (even unintentionally; lunch deals on Sand Hill Road are often devoid of reality) to push us toward centralized repressive regimes of corporate data capture.

Action Speaks Louder Than Words

1. Demand transparency from those with financial interests in AI adoption
2. Deploy data sovereignty technology like W3C Solid
3. Invest in development of open standards and protocols
4. Refuse products that fail to respect data rights
5. Measure the promise of “AI companions” against history and call a spade a spade

Remember: Just as people unfortunately took health advice from doctors paid by tobacco companies and societal harms rocked their world with tragedy, be wary of advice from anyone who appears invested in surveillance used for digital human trafficking.

Your digital self deserves a home you own with locks you control, not a ruse to relocate you into a temporary cell with guards who answer to an unaccountable system of elite ownership.

Take it from someone who worked three decades in the VC-funded corridors of Silicon Valley disruption and dislocation; I’ve seen it all first hand. From the forefront of defending users from harm, this is a moment of danger that should not be underestimated.

Privacy Implications

The a16z article’s casual approach to privacy concerns, comparing AI brain monitoring to location sharing, overlooks the unprecedented scope and intimacy of the data collection proposed. This technology would have access to users’ thoughts, memories, and personal interactions — a level of access that demands rigorous privacy protections and strong ethical frameworks, all of which are entirely missing in a suspiciously saccharine fantasy pitch.

Let me explain this another way, looking at similar but different headlines. We are talking here about human behavior when it comes to power and technology. Hundreds of thousands of people are said to have died in high-technology control and capture systems around Damascus. Tens of thousands who were “uploaded” into detention centers are today wandering out, unable to even say their names or explain the horrors they experienced under the Assad regime of population control. Assad, meanwhile, is said to be unimaginably wealthy, lounging in a $40m penthouse in Moscow. Such tragic stories of untold suffering and death without justice will undoubtedly be repeated in the world of AI if the wrong people are believed.

History has shown that such abusive surveillance systems don’t just happen “over there”, because they also happen right here at home… in Palo Alto.

In the 1950s-60s, the CIA’s infamous MK-ULTRA program ran experiments on unwitting Stanford students, offering them drugs and prostitutes while secretly studying them for research into psychological manipulation and informant control methods. Funding technology to expose people’s thoughts? Finding ways to “offer” opportunities for people to be “understood” better? Sound familiar? Palo Alto schemes masquerading as innovation while seeking ways into people’s heads have a particularly important precedent.

Like those CIA researchers, today’s venture capitalists frame their surveillance as voluntary and beneficial. But there’s a clear through-line from MK-ULTRA’s “consent” model to modern tech’s manipulative push for “uploading” our minds. Both serve powerful interests seeking profit from psychological control, only now it’s packaged as a consumer product rather than as drugs and prostitutes in a front for government research. The core abuse theory remains the same, exploiting susceptibility to offers of “benefits” to gain access to human consciousness. For what purposes?

While the NYT warns about teen suicide from chatbot use, and CNN similarly documents a mother’s pain from a death attributed to unregulated AI, this a16z author flippantly promotes rapid adoption of these systems, saying she loves what she sees while looking only at the benefits to herself.

Show me where a16z has any record of guardrails intended or expected to prevent societal manipulation and control, let alone any attempt to demonstrate sane intent with this article itself.

Term Frequency in a16z Article

Term | Mentions | Context
Privacy | 1 | Only mentioned in a legal disclaimer to protect the author
Data Control | 1 | Brief mention of user control over data access
Security | 0 | Not a single mention
Data Rights | 0 | User rights completely omitted
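Counts like those above can be reproduced with a few lines of whole-word matching. This is a hypothetical sketch of the method, not the actual tooling used for the analysis; the sample text and term list are illustrative only.

```python
import re


def term_counts(text: str, terms: list[str]) -> dict[str, int]:
    """Count case-insensitive, whole-word mentions of each term."""
    lowered = text.lower()
    return {
        term: len(re.findall(r"\b" + re.escape(term.lower()) + r"\b", lowered))
        for term in terms
    }


sample = "Export your brain to AI! Privacy? See the legal disclaimer."
print(term_counts(sample, ["privacy", "security", "data rights"]))
# {'privacy': 1, 'security': 0, 'data rights': 0}
```

Whole-word boundaries (`\b`) matter here: without them, a term like “AI” would match inside unrelated words and inflate the counts.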

OH Tesla Kills One in “Veered” Crash

Another day, another preventable Tesla death from a single-vehicle crash after the car failed to stay on the road.

The Canfield Post of the Ohio State Highway Patrol is investigating a fatal crash.

The crash occurred at approximately 4:28 p.m. Monday on West Calla Rd.

Kyle Soli of Salem was operating a 2025 Tesla Model 3 when he traveled off the right side of the road, striking a mailbox, ditch, culvert, and multiple trees, and overturning several times before coming to rest on all four tires.

Soli was the only occupant in the car, and was transported to the hospital where he died from his injuries.

A 2025 model? Who in 2024 is looking at the data and buying a 2025 model Tesla? This doesn’t bode well for anyone even thinking about buying a new Tesla.

Tesla vehicles suffer fatal accidents at a rate that’s twice the industry average, according to a new report.

Initial statements by police point to exactly the kind of accident Tesla’s CEO claims their technology should prevent. The driver’s Model 3 left the road and rolled multiple times after striking several objects.

Police statements describe alleged impairment and advocate seatbelt use. These circumstances highlight a dangerous contradiction: Tesla markets its driver assistance features as safety enhancers while its CEO publicly promotes the idea that its cars can safely transport sleeping drivers. This messaging is known to encourage dangerous behavior, adding to mounting concerns about Tesla’s safety record.

Insurance Institute for Highway Safety (IIHS) data shows fast-rising death rates for the Model 3 that directly contradict Tesla’s claims of superior safety.

Source: IIHS

Clausewitz Paradox: When Thinking About Thinking Becomes Routine

Military professionals love a good Clausewitz discussion, especially looking at this past week in Syria. His trinity of people, army, and government has become almost liturgical. It’s the kind of comfortable framework we apply to everything from counterinsurgency to cyber warfare. But there’s an irony here that Clausewitz himself might appreciate: our very reliance on his framework demonstrates the human tendency to turn dynamic thinking into static routine.

Perhaps Clausewitz’s best insight, not unlike what has been found in every other profession in the world, was that warfare exists in constant tension between:

  • What can be systematized (tactics, drills, logistics)
  • What requires judgment (strategy, adaptation, creativity)

But here’s the meta-lesson: The way we invoke Clausewitz has itself become a routine. We’ve turned his warning about the dangers of routine thinking into… a routine way of thinking.

The crystallization of dynamic thought into static procedure appears everywhere in human endeavor. Scientific methods become checklist science. Medical diagnosis becomes search-engine symptom matching. Strategic planning becomes fill-in-the-blank templates.

The true lesson of Clausewitz thus shouldn’t be reduced to his trinity or his maxims. It comes from recognizing a balance that is often lost, that even our frameworks for handling complexity can become cognitive crutches. His work should be a cradle for military thought, not its grave. The moment we think we’ve fully understood Clausewitz is the moment we’ve missed his point entirely.

I submit that the best way to honor Clausewitz is to recognize when we need to move beyond him, as he argued that each age must write its own book about war. The most dangerous routine might be our routine ways of thinking about how to avoid routine thinking.

The Journal of the United States Artillery once put it like this:

Source: Journal of the United States Artillery, Volume 81, Page 293, 1938

This quote perfectly captures a recursive rule about not following rules slavishly. And the source makes it even more powerful: Grant often was criticized by his contemporaries for being “unscientific” and not following accepted military wisdom, yet he was unquestionably the most successful general of the Civil War, if not all American history.

Even the way we think about thinking needs to avoid becoming dogmatic. The real art is maintaining the tension between structure and adaptability, knowing enough to be competent but remaining flexible enough to be creative.

I’ve heard this as the healthy mental river flow, where we must avoid becoming tangled upon either bank. One is chaotic and forever giving way, the other is rigid and unforgiving. The irony is that this too could become a rigid formula if we’re not careful!

And for what it’s worth, the seditious Confederate General Lee’s rigid adherence to offensive doctrine, a fixation on decisive Napoleonic-style engagements, led to several catastrophic decisions.

  • Favored aggressive offense to expand slavery, instead of the defensive tactics that were far more strategic to preserve slavery
  • Focused on his personal stake in Virginia theater operations despite the war’s center of gravity shift west
  • Continued agitating for decisive battle outcomes even after Gettysburg showed this was fatally flawed

Grant, by contrast, showed remarkable adaptability, thinking 100 years ahead of his time.

When Grant encountered a problem at Vicksburg, he didn’t just try a different tactical approach, he totally innovated into what was possible. After failed frontal assaults, he executed one of the most audacious campaigns in military history: he marched his army down the western bank of the Mississippi, ran gunboats and transport ships past the Confederate batteries at night (a move considered suicidal), crossed back to the eastern bank well south of Vicksburg, and then lived off the land while cutting loose from his supply lines entirely.

This was mind-bending for the era. Armies were supposed to maintain their supply lines at all costs. Instead, Grant’s troops, carrying just five days of rations, marched through enemy territory for two weeks, fighting five major battles and confounding both the Confederates and his own superiors. When Lincoln heard of this, he said:

I think Grant has a thought. He isn’t quite sure about it, but he has it.

At Cold Harbor, after suffering heavy casualties in frontal assaults (observing them as mistakes), Grant didn’t retreat to lick his wounds like his predecessors. Instead, he secretly moved his entire army across the James River — a force of 100,000 men with wagons, artillery, and supplies — using a 2,100-foot pontoon bridge. The Confederates didn’t even realize he’d gone until his army was threatening Petersburg.

The Overland Campaign showed Grant’s grasp of both operational art and psychology. Previous generals had retreated after tangling with the “monster” Lee. Grant, instead, kept moving southeast. After each battle, his troops expected to retreat north. Instead, they’d get orders to advance by the left flank. This persistent southward movement had a profound psychological effect on both armies. Union troops began to see they were finally heading toward Richmond, while Confederate troops realized this enemy wasn’t going to quit at first bluster.

Even his staffing choices showed innovation. While other generals relied on West Point graduates, Grant promoted talented officers regardless of background. He elevated leaders like William Smith (originally a civilian vigneron) and James Wilson (who became a cavalry commander at 26) based on demonstrated ability rather than formal education. Perhaps due to his own “self-made” background, he dismissed patronage as irrelevant to performance.

Then there was his approach to intelligence gathering. Rather than relying solely on cavalry scouts and spies, Grant made extensive use of freed slaves’ knowledge of local geography and Confederate movements. This wasn’t just innovative, it echoed his dedication to human value and talent as transformative, recognizing the strategic value of local knowledge that others ignored due to racism.

These weren’t just tactical innovations, they represented a flexible yet practical way of thinking about the world. A fundamentally different path than what came before.

Lee remained fixated on winning decisive battles in a Napoleonic style, while Grant grasped how the Civil War was changing everything, becoming what we’d now call a “total war,” requiring an operational art that combined military, political, and economic elements… not unlike what we’ve seen in Syria lately.

The campaign that best exemplifies Grant’s transformative touch was the strategic March to the Sea, executed by Sherman. While Lee sat in his tent obsessing over future battlefield glory, Grant understood that Confederate resistance depended on both military force and civilian will. The March to the Sea was about demonstrating the Confederacy’s aggression as weakness by revealing its inherent inability to protect its own territory.

Grant had likely never been exposed to Clausewitz, but the Prussian theorist would have recognized in Grant’s strategy the targeting of the enemy’s center of gravity, the key to his resistance.

The rise of cyberwarfare, AI, and hybrid warfare demands the kind of adaptable systemic thinking Grant exemplified rather than Lee’s routine and doctrinaire (e.g. racist) approach. So the next time someone waves an ISIS or Confederate flag, just think about it… because waving it stands as evidence they don’t.

Tesla Cyber Dumbtruck Easily Defeated by Pothole

Yo, yo, yo, everyone, check this out:

Tesla made a Dumbtruck. Not a DumPtruck. A DUMB truck.

It looks like a robot tried to draw a car but could only move in straight lines ’cause adding curves would take advanced math. Someone’s toddler really opened up Microsoft Paint, drew three lines, and daddy said “my baby so perfect at drawing, ship it!”

Wait, did I assume there was a real child interaction? This is what happens when you let an artificial baby called Grok design a car using only ASCII art. ◢▢◣ “Perfect, ship it!”

Tesla’s drugged-up white supremacist hyperventilating CEO has been flying big fancy private jets full of his bodyguards around to talk up his “bulletproof this” and “apocalypse-ready that,” like he believes non-white people using public transit are brain-eating zombies. Then a pothole just… broke his vision for a comfy white-enclave nation-state called Mars Technocracy.

Like, a regular degular pothole.

That’s wild. That’s like if I paid a million dollars to ride in his submarine to the Titanic and on the way down it imploded killing me and my kids. Can you imagine that? You telling me a “future truck” just got taken out by something that’s been messing up cars since we stopped riding horses? Water turns out to be wet?

That’s embarrassingly dumb. Dumbtruck.

And the wildest part? Teenagers in the ’70s were making better cars than this. Like, actual teenagers fifty years ago could do better. While they were probably listening to disco and dealing with whatever wild fashion choices people were making back then, they still managed to not have their cars split in half from hitting a… pothole.

Any comparison to ’70s teenage engineers kills me because it’s true. Those kids built cars in their spare time with actual metal, actual curves, and actual durability, tripping over bell-bottoms and listening to Earth, Wind & Fire. Yet their cars still run and run, unlike this cyberorigami techbro experiment out here folding itself in half at the first touch of reality.

You really gotta wonder what Tesla tests against. Seems like they prepared for everything except… actual roads. That’s like waterproofing your umbrella but forgetting to make it openable.

*sips tea*

You know what this means though? Potholes are officially more powerful than the best ideas from the Tesla cult’s supreme leader Elon Musk. Potholes just got a power-up in the game of life. They need to update their status:

“Just destroyed a $100,000 truck. Harder than a Rhodesian plan for Mars. 💪”

For real though – imagine spending supercar money on something “future-proof” that got defeated by the 1800s. That’s like buying a spaceship that makes it harder to get to space.

That’s like a steroid-addled puffed up body-builder losing a fight to a sidewalk crack. Tesla can’t talk trash about other cars when theirs are literal trash, like throw it away and get something else if you want to actually drive A to B.

How you gonna pull up to a Honda Civic or a Toyota Pickup when your Tesla just got bodied by some missing asphalt?

Source: Ford

Japanese and German automobile manufacturing, honed under post-WWII American military occupation, produced low-cost cars that run 300,000 miles looking like they’ve been through a war zone, because they literally were hard-won post-war outputs.

Meanwhile, a soft-palmed South African kid, who tucked tail and fled the fall of apartheid to America with bags of cash to take over Tesla, just lost a fight with a hole. Not even a special hole. His best anti-woke ideas can’t even handle a regular Tuesday pothole.

What’s next, you’re going to tell me the heater doesn’t work in a Canadian winter? Have to wear your outside clothing when you ride inside one?

Tesla Cybertruck Immediately Dies in Canadian Winter – Owner Bricks the Truck Trying to Use the Defroster

How’s that for cool? The Cybertruck is so anti-woke it can’t even wake up. Finally, a luxury vehicle that lets you experience the elements from the inside.

It’s like Tesla spent all their engineering points on “look very cyber” and forgot to allocate time for “basic car stuff.” Every other car company with engineers must be looking at this like “Y’all good over there? Need some notes on how cars work?”

Tesla’s robot saw a pothole and said “Time to transform… into two pieces that don’t work.” Tesla’s robot saw snow fall and said “Time to transform… into a block of ice.”

The price of South African anti-woke racist stock surely is going straight up on this news.