The Stupid. It Burns.
Six months of nation-state access to highly targeted networks, simply because a widely deployed tool treated TLS as its one and only integrity check rather than what it actually is: transport security.
The “sophisticated” attack reads like a tourist getting their wallet stolen off a beach chair while they were out for a swim. Easy pickings for anyone willing to exploit unsophisticated engineering.
I love reading Dan Goodin, perhaps my favorite tech reporter of all time, but his article buries the lede:
…insufficient update verification controls that existed in older versions.
That’s the whole game, right there.
All the threat intelligence theater, the chill names like “Chrysalis” and “Lotus Blossom,” the attribution to China-state actors getting “hands-on-keyboard” drama, obscures that this has been a solved problem since at least 2005. Like, twenty years ago Microsoft OEM’d an Israeli patching company, said oh shit, we need to sign code, and that should have been the end of it, right?
Linux package managers have done cryptographic signature verification for two decades. Using apt, yum, pacman, and the rest means GPG signatures get verified against pinned keys before anything executes. Done and dusted. This fix is older than many of the people involved in this disaster.
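A minimal sketch of that pattern, with Ed25519 standing in for GPG and every name in it hypothetical: one key pinned in the installed client verifies a signed manifest, and the manifest’s hashes gate every artifact.

```python
# Sketch of the distro pattern: a signed index of hashes, verified against
# a key that ships with the installed client rather than with the download.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Placeholder bytes; the real key is baked into the installed binary,
# never fetched from the update server.
PINNED_PUBKEY = bytes(32)

def verify_artifact(manifest: bytes, manifest_sig: bytes,
                    name: str, artifact: bytes) -> bool:
    """Accept `artifact` only if a pinned key signed a manifest naming it."""
    pub = Ed25519PublicKey.from_public_bytes(PINNED_PUBKEY)
    try:
        pub.verify(manifest_sig, manifest)  # raises on forgery or tampering
    except InvalidSignature:
        return False
    # Manifest lines look like: "<sha256-hex> <filename>"
    expected = {}
    for line in manifest.decode().splitlines():
        digest, fname = line.split(maxsplit=1)
        expected[fname] = digest
    return hashlib.sha256(artifact).hexdigest() == expected.get(name)
```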
Why am I even writing about this?
The attack chain was to intercept update requests, redirect to a malicious binary, and let it execute. A checksum alone won’t save you here: if the attacker owns the distribution infrastructure, they serve the bad binary and a matching hash.
Self-consistent fraud.
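A toy model of that circularity, with a dict standing in for the compromised server (nothing here is the real attack tooling):

```python
# The attacker answers both requests: the binary and its checksum.
import hashlib

malware = b"MZ...definitely-a-legit-update..."
compromised_server = {
    "/update/npp.exe": malware,
    "/update/npp.exe.sha256": hashlib.sha256(malware).hexdigest(),
}

def naive_update() -> bytes:
    binary = compromised_server["/update/npp.exe"]
    claimed = compromised_server["/update/npp.exe.sha256"]
    # The "integrity check": compare the payload against a hash served by
    # the very infrastructure we should be suspicious of.
    if hashlib.sha256(binary).hexdigest() != claimed:
        raise RuntimeError("corrupted download")
    return binary  # malware, checksum happily "verified"
```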
The actual fix for the integrity breach is an asymmetric signing architecture, and key handling is the key. The developer signs a binary with a private key that never lives on update infrastructure. The client verifies against a public key pinned in the already-installed binary. Own the servers all you want; you can’t forge the signature without the private key, and the private key isn’t there.
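The whole architecture fits in a few lines. A sketch with the Python `cryptography` package; the split between a build machine and a client is the point, not the specific library:

```python
# Publisher side: runs on the build machine. Only the binary and its
# signature ever reach the update servers; the private key stays home.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_release(private_key: Ed25519PrivateKey, binary: bytes) -> bytes:
    return private_key.sign(binary)  # 64-byte detached signature

# Client side: the public key is baked into the already-installed binary,
# so a compromised server can't substitute its own.
def verify_release(pinned_pubkey_raw: bytes, binary: bytes, sig: bytes) -> bool:
    pub = Ed25519PublicKey.from_public_bytes(pinned_pubkey_raw)
    try:
        pub.verify(sig, binary)
        return True
    except InvalidSignature:
        return False
```

An attacker who owns the distribution server can swap the binary, swap the signature, or both; what they can’t do is produce a pair that verifies.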
Here’s the part that should make you spit tea all over your screen. Or maybe that’s just me. They had signing. From Beaumont’s razor-sharp analysis:
The downloads themselves are signed—however some earlier versions of Notepad++ used a self signed root cert, which is on Github.
Nice.
The lock was in the door, and so was the key. The integrity mechanism existed in form but not in function. A self-signed cert with the key published on GitHub means anyone who could redirect traffic could also forge valid signatures. That’s sad theater: the appearance of an integrity control that doesn’t actually constrain anything.
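In code, the failure is one line. A toy demonstration reusing sign_release and verify_release from the sketch above, with a freshly generated key standing in for the one sitting in a public repo:

```python
# If the "private" key is public, anyone on the network path can mint
# releases that the client's pinned key happily accepts.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

leaked_key = Ed25519PrivateKey.generate()  # stand-in for the key on GitHub
pinned = leaked_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
)  # what ships in the client: the public half of a key whose private half is public

malware = b"MZ...attacker build..."
forged = sign_release(leaked_key, malware)
assert verify_release(pinned, malware, forged)  # passes. That's the problem.
```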
Does content-addressable integrity need better marketing or something? I don’t get it. The transport layer is one layer of defense in depth, and someone confused it with the core package integrity mechanism itself. And the actual signing layer, the one that should have been the real gate, was all hat, no cattle.
Resources were probably allocated entirely to features and user growth. Someone reached for transport layer security without bothering to understand its limitations. The missing content integrity controls were a predictable, catastrophic failure.
Apparently no regulator required the basic cryptographic verification that actually prevents this, so content distribution never innovated on authenticity. Now we get to read about an integrity breach and a software developer scrambling to apologize and belatedly patch in what should have been there twenty years ago.
Solved cryptographic engineering. Same pattern, always. You see it everywhere these days. A consent banner that doesn’t constrain data collection. An operations audit that doesn’t examine infrastructure. A signature that doesn’t verify authenticity.
The presence of a control, without regulation to enforce a standard of care, can become dangerous cover for its absence.