Arvid Lunnemark, one of the 2022 MIT mathematics graduates behind Cursor, the product that may be looking at all your code, wrote in 2021 about achieving “complete privacy” through cryptographic means. His published engineering principles reveal exactly why his approach to privacy is so concerning (two of them are sketched in code just after the list):
1. Deal with the mess.
2. Let different things be different.
3. Always use UUIDs.
4. Delay serialization.
5. Look at things.
6. Never name something `end`. Always either `endExclusive` or `endInclusive`.
7. When a method is dangerous to call, add `_DANGEROUS_DANGEROUS_DANGEROUS_BECAUSE_XYZ` to its name…
8. Good tools are critical… If there isn’t a good tool, make one.
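For readers unfamiliar with this style, here is roughly what principles 6 and 7 look like in practice. This is a purely illustrative sketch, adapted to Python's snake_case naming; the functions and identifiers are hypothetical and not taken from Lunnemark's or anyone else's codebase:

```python
import uuid

# Principle 6: never a bare `end`; say whether the bound is inclusive or exclusive.
def char_range(start: int, end_exclusive: int) -> range:
    """Return indices from start up to, but not including, end_exclusive."""
    return range(start, end_exclusive)

# Principle 7: a destructive operation announces its danger in its own name.
def delete_all_user_data_DANGEROUS_DANGEROUS_DANGEROUS_BECAUSE_IRREVERSIBLE(user_id: uuid.UUID) -> None:
    """Stand-in for an irreversible wipe of everything stored for user_id (principle 3: UUIDs)."""
    print(f"wiping user {user_id}")

print(list(char_range(0, 3)))  # [0, 1, 2]
delete_all_user_data_DANGEROUS_DANGEROUS_DANGEROUS_BECAUSE_IRREVERSIBLE(uuid.uuid4())
```

Tidy, explicit, and self-documenting, and that is exactly the point of what follows: none of it says anything about privacy.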
This reads like an inexperienced person's answer to the question of happiness. It is the least compelling answer to privacy: clean, organized, tool-focused, and utterly disconnected from the reality of real-world communication.
The tone is reminiscent of Douglas Adams’ “Hitchhiker’s Guide to the Galaxy,” in which the supercomputer Deep Thought calculates the answer to life, the universe, and everything as simply “42”: technically “correct,” yet fundamentally useless without an understanding of the question.
Lunnemark’s approach to privacy embodies this same catastrophic mistake, openly and proudly (hey, he was still a student, stretching his new wings).
His principles reveal a schooled mindset that believes complex problems can be reduced to technical solutions by building the “right” tools and labeling dangers “explicitly” enough. This isn’t just naive; it rests on tautological fallacies and is potentially harmful.
Privacy with better UUIDs or cleaner method names is like a bank vault with cleaner threads on its screws. Revolutionary? Pun intended, but not really. Protection from the loss of privacy sits within centuries-old power struggles between individuals, economic or social groups, and states. It operates within power systems that incentivize imbalance for well-known reasons. It functions very, very differently across cultural and political contexts.
When someone who graduates from MIT in 2022 proclaims the year before that they’ve found the answer to privacy through better cryptography, they’re giving us their “42”—a solution to a problem they haven’t properly understood.
Such technical reductionism has real consequences. The whistleblower who trusts in “complete privacy” might face legal jeopardy no cryptography can prevent. The activist who believes their communications are “cryptographically completely private” might not anticipate physical surveillance, economic coercion, or infamously documented rubber-hose cryptanalysis.

The inexperienced quick-fix engineering mindset that treats privacy as primarily a technical problem is dangerous because it creates false security. It promises certainty in a domain where there is none, only trade-offs and calculated risks. It substitutes a fetish for mathematical proofs in place of proper sociopolitical understanding. You want more confidentiality? You just lost some availability. You want more availability? You just lost some integrity.
History repeatedly shows that technical absolutism fails. In fact, I like to point to neo-absolutist secret services (meant to preserve elitist power) as a great example of important yet overlooked history:
> …the regime rested on the support of a standing army of soldiers, a kneeling army of worshippers, and a crawling army of informants was exaggerated but not entirely unfounded.
The German Enigma machine was notably undermined in 1932, years before WWII, by Polish mathematicians who understood and exploited weak supply chains and other human factors. PGP encryption has been theoretically secure while practically shunned for being unusable, because who has invested in the real issues? End-to-end encryption protects message content but still leaks metadata, as Lunnemark correctly identifies, yet his solution falls into the same trap of believing that the next technical iteration will be the first one to “solve” privacy.
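To make the metadata point concrete, here is a minimal sketch under stated assumptions: the envelope fields and function name are hypothetical, and random bytes stand in for real ciphertext. Even with perfect end-to-end encryption of the body, everything a relay needs to route the message stays readable:

```python
import os
import time

def send_encrypted(sender: str, recipient: str, plaintext: bytes) -> dict:
    """Illustrative only: the body is stand-in ciphertext (random bytes),
    but the routing fields any relay or observer sees remain in the clear."""
    ciphertext = os.urandom(len(plaintext))  # placeholder for real E2E encryption
    return {
        "from": sender,            # metadata: who is talking
        "to": recipient,           # metadata: to whom
        "timestamp": time.time(),  # metadata: when
        "size": len(ciphertext),   # metadata: roughly how much was said
        "body": ciphertext,        # the only part the cryptography actually hides
    }

envelope = send_encrypted("whistleblower", "journalist", b"the documents are ready")
print({k: v for k, v in envelope.items() if k != "body"})
# Everything printed here leaks without breaking a single cipher.
```

Who talked to whom, when, and how often is frequently more damning than what was said, and no amount of stronger encryption of the body changes that.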
Young engineers aren’t wrong to build better privacy tools; we desperately need better things, and better is better! But they need to approach measuring “better” with humility and interdisciplinary understanding. What’s good for one may be bad for many, and what’s good for many may be bad for one. Engineers in other disciplines must sign a code of ethics; a computer engineer signs none. They need to recognize that they’re not the first to think deeply about problems like privacy, and that philosophers, historians, economists, and political scientists have insights that algorithms alone cannot provide.
Key management is much more an interesting problem of social science than one of the mathematical properties of “better” material for making locks strong, or even the finer threads on a vault screw.
The answer to privacy isn’t 42, and it isn’t “complete cryptographic privacy” either. It’s a complex, evolving negotiation that requires technical innovation alongside deep understanding of human systems. Until our bright young minds grasp this, they risk creating even worse problems rather than real solutions.
Honestly, I’d rather be riding a mechanical horse than driving a car because legs are “better” than wheels in so many ways I’ve lost count. The “automobile” rush that pushed everyone and everything off roads has been a devastatingly terrible idea, inferior in most ways to transportation much older. Those promoting the “king’s carriage” mentality often turn out to be aspiring kings, rather than solving problems to make transit “better” for anyone but themselves.
Since we’re in the world of agentic innovation, and I suspect a 2021 MIT student blog post never saw a proper review, here’s an AI take on “what would they say”:
- Elinor Ostrom: “Lunnemark’s proposal demonstrates the danger of assuming universal solutions to complex governance problems. Privacy is not merely a technical problem but a common-pool resource that different communities manage according to their specific needs and contexts. His proposal ignores the polycentric systems through which privacy is negotiated in different social and political environments. Such a cryptographic approach imposed from above would likely undermine the diverse institutional arrangements through which communities actually protect their information commons. Effective privacy protection emerges from nested, contextual systems of governance, not from mathematical proofs developed in isolation from the communities they purport to serve.”
- Hannah Arendt: “Lunnemark has mistaken a political problem for a technical one. Privacy is the precondition for political action, beyond the reach of any investigating authority. When he speaks of ‘complete privacy’ through cryptographic means, he betrays a profound misunderstanding of how power operates. Privacy exists in the realm of human affairs, not in mathematics. The rush into ‘cryptographically complete privacy’ would offer the illusion of protection while doing nothing to address the fundamental power relationships that determine who can act freely in the public sphere. Technical shields without political foundations are mere fantasies that distract from the real work of securing human freedom.”
- Michel Foucault: “How fascinating to see power’s newest disguise. This discourse of ‘complete privacy’ merely gives authority a new look, obscuring how its role has quietly expanded. The author believes he can escape the panopticon through mathematical means, yet fails to see how this very belief is produced by the systems of privileged knowledge that determine what ‘privacy’ means in our epoch. His solution doesn’t challenge surveillance; it normalizes it by accepting its terms. True resistance requires not better cryptography but questioning the entire apparatus that makes privacy something we must ‘solve’ rather than a condition we demand. His technical solution doesn’t escape power; it merely reconfigures how power is exercised, likely to the benefit of the shovel manufacturer in the graveyard.”
We need to find and fight against not just one specific proposal such as Lunnemark’s, but the entire mindset of technical solutionism whenever it creeps into technology circles that continue to operate without any code of ethics.