The Wall Street Journal has rushed to print a breathless report about the “growing security risks” of LLMs, painting a picture of unstoppable AI threats that companies must face “on their own” because of slow-moving government regulation in America.
Reading it, you’d think we were facing an unprecedented crisis with no solutions in sight, and that everyone has to be some kind of libertarian survivalist nut to run a business.
*sigh*
There’s a problem with this 100,000-foot view of the battlefields some of us are slogging through every day down on earth: actual security practitioners have spent decades solving the exact challenges this article treats as theory.
Let’s break down the article’s claims versus reality:
Claim: “LLMs create new cybersecurity challenges” that traditional security can’t handle
Reality: Most LLM “attacks” fail against basic input validation, request filtering, and access controls that have existed since the 1970s. A security researcher could demonstrate, as just one example, that LLM exploits are blocked by filtering products like web application firewalls (WAF). Perhaps it’s time to change the acronym for this dog of an argument to Web Warnings Originating Out Of Outlandish Feudal Fears (WOOF WOOF). This is not to say wide-open, unfiltered, unregulated systems won’t fail catastrophically at safety; running one is a completely suicidal notion. There was once, and I swear I’m not making this up, a person who decided they would eat at the lowest online-rated restaurants to see if they could personally validate the low ratings… and almost immediately they ended up in a hospital. Could we handle proving that radio and TV Nazism is newspaper Nazism? You be the judge.
Nobody should be surprised when a long-time Nazi promoter… does what he always has done. Nothing about that Nazi salute is news to anyone paying attention for the last decade to Elon Musk saying lots of Nazi stuff. To the WSJ I guess Nazi salutes are confusing and new simply because… they come out of the technology fraud known as Tesla.
Claim: Companies must “cope with risks on their own” without government help
Reality: ISO/IEC 42001:2023 already published standards for AI management systems (AIMS) covering ethical considerations and transparency. The NIST AI Risk Management Framework (AI RMF) is also a thing, and who can forget last year’s EU AI Act? Major cloud providers operating in a global market (e.g. GCP Vertex, AWS Bedrock and Azure… haha, who am I kidding, Microsoft fired their entire LLM security team) have LLM-specific security controls documented because of global regulations (and because regulation is the true mother of innovation). These aren’t experimental future concepts; they’re production-ready and widely deployed to meet customer demand for LLMs that aren’t an obvious dumpster fire by design.
And even more to the point, today we have trusted execution environment (TEE) providers delivering encrypted-enclave LLMs as a service… and while that sentence wouldn’t make any sense to the WSJ, it shows how far reality is from the fairy tales of loud OpenAI monarchs trying to scare the square pegs of society into an artificially round “eating the world” hole.
Om nom nom again? No thanks, I think we’ve had enough “golden” fascist tech vision for now.
“Come here, tasty chickens, my very dangerous coop can set you free,” says the VC fox, pointing to his LLM registration page that looks suspiciously like a 1930s IBM counting machine set up by Hitler’s government.
Claim: The “unstructured and conversational nature” of LLMs creates unprecedented risks
Reality: This one really chaps my hide, as the former head of security for one of the most successful NoSQL products in history. We’ve been securing unstructured data and conversational interfaces for years. I’ve personally spearheaded and delivered field-level encryption, and I’m working on even more powerful open standards. Ask any bank managing its chat history risks, or any healthcare provider handling free-text medical records and transcription systems. The same principles for securing human language in technology, applied for decades, apply to LLMs.
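To make that concrete, here is a minimal sketch of field-level encryption applied to a free-text chat record before it hits storage or an LLM pipeline. The record shape, the field names, and the helper functions are my own illustration (not any particular bank’s or vendor’s implementation), using the standard Python cryptography library.

```python
# Minimal sketch: field-level encryption of free-text fields before storage.
# The record layout and helper names are illustrative, not a specific product.
from cryptography.fernet import Fernet

# In production the key comes from a KMS/HSM, never hard-coded like this.
key = Fernet.generate_key()
fernet = Fernet(key)

SENSITIVE_FIELDS = {"message", "transcript"}  # free-text fields to protect

def encrypt_record(record: dict) -> dict:
    """Encrypt only the sensitive free-text fields, leaving metadata queryable."""
    protected = dict(record)
    for field in SENSITIVE_FIELDS & record.keys():
        protected[field] = fernet.encrypt(record[field].encode()).decode()
    return protected

def decrypt_field(record: dict, field: str) -> str:
    """Decrypt a single field on demand, after an access-control check elsewhere."""
    return fernet.decrypt(record[field].encode()).decode()

chat = {"user_id": "u-123", "channel": "support", "message": "please reset my card"}
stored = encrypt_record(chat)
assert stored["message"] != chat["message"]            # ciphertext at rest
assert decrypt_field(stored, "message") == chat["message"]
```

The boring point: metadata stays queryable, the free text is ciphertext at rest, and the decrypt path sits behind whatever access controls you already run.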
The article quotes exactly zero working security engineers. Instead, we get predictions from a former politician and a CEO selling LLM security products. It’s like writing about bridge safety but only interviewing people selling car insurance.
Here’s what actual practitioners are doing right now to secure LLMs:
- Rate limiting and anomaly detection catch repetitive probe attempts and unusual interaction patterns – the same way we’ve protected APIs for years. An attacker trying thousands of prompt variations to find a weakness looks exactly like traditional brute force that we already detect.
- OAuth and RBAC don’t care if they’re protecting an LLM or a legacy database – they enforce who can access what. Proper identity management and authorization scoping means even a compromised model can only access data it’s explicitly granted. We’ve been doing this since SAML days.
- Input validation isn’t rocket science – we scan for known malicious patterns, enforce structural rules, and maintain blocked token lists. Yes, prompts are more complex than SQL queries, but the same principles of taint tracking and context validation still apply. Output controls catch anything that slips through, using the same content-filtering patterns developed for data loss prevention (see the first sketch after this list).
- Data governance isn’t new either – proven classification systems already manage sensitive data through established group boundaries and organizational domains. Have you seen SolidProject.org, by the man who invented the Web? Adding LLM interactions to existing monitoring frameworks just means updating taxonomies and access policies to respect long-standing organizational data boundaries and user/group trust relationships. The same principles of access grants, control, and clear data sovereignty that have worked for decades apply here yet again (see the second sketch after this list).
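To show how pedestrian the first and third bullets really are, here is a rough sketch of an LLM request gateway: a sliding-window rate limit per client, a blocklist for known-bad prompt patterns, and DLP-style scrubbing of the output. Every threshold, pattern, and function name here is an illustrative assumption, not any vendor’s product.

```python
# Sketch of an LLM request gateway: rate limiting, input filtering, output DLP.
# Thresholds, patterns, and names are illustrative; tune them to your environment.
import re
import time
from collections import defaultdict, deque

MAX_REQUESTS_PER_MINUTE = 30

# Known-bad input patterns -- the same blocklist idea a WAF applies to HTTP.
BLOCKED_INPUT = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"\bsystem prompt\b", re.I),
]

# DLP-style output patterns: don't let obvious secrets or PII leak back out.
BLOCKED_OUTPUT = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # US SSN shape
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # credential-looking strings
]

_request_log = defaultdict(deque)  # client_id -> timestamps of recent requests

def allow_request(client_id: str, now: float | None = None) -> bool:
    """Classic sliding-window rate limit -- bulk probing looks like brute force."""
    if now is None:
        now = time.time()
    window = _request_log[client_id]
    while window and now - window[0] > 60:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        return False
    window.append(now)
    return True

def check_prompt(prompt: str) -> bool:
    return not any(p.search(prompt) for p in BLOCKED_INPUT)

def scrub_response(text: str) -> str:
    for pattern in BLOCKED_OUTPUT:
        text = pattern.sub("[REDACTED]", text)
    return text

def handle(client_id: str, prompt: str, model_call) -> str:
    if not allow_request(client_id):
        raise RuntimeError("rate limit exceeded")
    if not check_prompt(prompt):
        raise ValueError("prompt rejected by input filter")
    return scrub_response(model_call(prompt))
```

An attacker hammering this with thousands of prompt variations trips the same limiter that has protected APIs for years.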
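And the second and fourth bullets, sketched together: ordinary RBAC plus data classification deciding what a given identity (human or model) is allowed to retrieve. The roles, labels, and the retrieve_for_llm helper are hypothetical placeholders; the point is that nothing about the control is LLM-specific.

```python
# Sketch of RBAC plus data classification applied to LLM retrieval.
# Roles, labels, and documents here are hypothetical placeholders.
from dataclasses import dataclass, field

CLASSIFICATION_RANK = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

# Which classification level each role may read -- same table you'd use for a database.
ROLE_CEILING = {
    "support_bot": "internal",
    "finance_analyst": "confidential",
    "contractor": "public",
}

@dataclass
class Document:
    doc_id: str
    classification: str
    text: str

@dataclass
class Principal:
    name: str
    role: str
    groups: set = field(default_factory=set)

def can_read(principal: Principal, doc: Document) -> bool:
    ceiling = ROLE_CEILING.get(principal.role, "public")
    return CLASSIFICATION_RANK[doc.classification] <= CLASSIFICATION_RANK[ceiling]

def retrieve_for_llm(principal: Principal, docs: list[Document]) -> list[str]:
    """Only hand the model what this identity is explicitly allowed to see."""
    return [d.text for d in docs if can_read(principal, d)]

corpus = [
    Document("d1", "public", "Published pricing sheet"),
    Document("d2", "restricted", "Unreleased audit findings"),
]
bot = Principal("helpdesk-llm", role="support_bot")
assert retrieve_for_llm(bot, corpus) == ["Published pricing sheet"]
```

Swap the in-memory tables for your identity provider and data catalog and the shape doesn’t change.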
These aren’t theoretical – they’re rather pedestrian, proven security controls that work today, despite the bullhorn-holding soap-box CEOs trying to sell armored Cybertrucks that in reality crash and kill their occupants at a rate 17X worse than a Ford Pinto. Seriously, the “extreme survival” truck pitch of the “cyber” charlatan at Tesla has produced the least survivable thing in history. Exciting headlines about AI apocalypse drive the wrong perceptions and definitely foreshadow the fantastical failures of the 10-gallon-hat-wearing snake-oil salesmen of Texas.
The WSJ article, when you really think about it, brings to mind mistakes that have been made in security reporting since the 15th-century panic about crossbows democratizing warfare.
Yes, at first glance, crossbows wielded by unskilled, over-paid kids serving an unpopular monarch were powerful weapons that could radically shift battlefield dynamics. Yet to the expert security analyst (the career knight responsible for defending the local populations he served faithfully) the practical limitations (slow reload times, maintenance requirements, defensive training) meant the technology supplemented rather than replaced existing military tactics. A “Big Balls” teenager who shot his load and then sat on the ground without a shield, struggling to rewind the crossbow, presented easy pickings, wounded or killed with haste (a trained archer fired a dozen rounds per minute versus no more than two for a crossbow skid). The same is true for LLM skids who don’t “Grok” security and re-introduce old vulnerabilities, none of which are lost on experts who grasp fundamental security principles.
When journalists publish theater scripts for entertainment value instead of practical analysis, they do our security industry a disservice. Companies need accurate information about real risks and proven solutions, not vague hand-waving warnings and appeals to fear that pump up anti-expert mysticism.
The next time you read an article about “unprecedented” AI security threats, ask yourself: are they describing novel technical vulnerabilities, or just presenting tired challenges through new buzzwords? Usually, it’s the latter. The DOGEan LLM horse gave a bunch of immoral teenagers direct access to federal data as if nobody remembered why condoms are called Trojans.
And remember, when someone tells you traditional security can’t handle LLM threats, they’re probably rocking up with a proprietary closed solution to a problem that repurposed controls or open standards could solve.
Stay salty**, America.
** Just as introducing salt ions disrupts water’s natural distributed hydrogen-bonding network, attempts by a fear-mongering WSJ to impose centralized security controls can weaken the organic, interconnected security practices that have evolved through decades of practical experience. The analogy below illustrates how strong distributed networks – water’s tetrahedral hydrogen bonds – become compromised when forced to reorient around centralized authorities such as Na+ and Cl- ions, a scientific pattern observable whether in molecular chemistry or information security.

Imagine you have a room full of people who are really good at passing messages to each other through a well-organized, democratic, distributed Internet. That’s like pure water, managing electrical effects efficiently through its hydrogen bond network. Now imagine some very loud, demanding people (DOGE, or salt ions) enter and demand everyone switch their attention to obnoxious rants about efficiency. The network rapidly degrades as the DOGEans disrupt all the natural communication channels, while falsely claiming they’re increasing efficiency by centralizing everything. Do we understand this for LLM security and the current massive DOGE breach of the federal government? Yes we do. Does the WSJ? No it does not. Alarmist, snake-oil-based centralized control – whether through ions or tech platforms run by DOGEans – significantly increases vulnerabilities and catastrophic breach risks.