The National Security Memorandum (NSM) on AI from the Biden administration caught my attention – but not for the reasons you might think. As I wrote with two co-authors in a recent Fordham Law Review paper on federalizing data privacy infrastructure, AI governance requires a comprehensive national security strategy. While many of my peers seem focused on legal and compliance implications, I see a more crucial technical gap that needs addressing: data architecture and sovereignty.
The Security Elephant in the Room
Let’s be frank – we’re building AI systems on shaky ground. The NSM talks about “safe, secure, and trustworthy AI,” but as any security professional knows, you want to avoid bolting security onto a system late in development. We need to build it into the foundation, and that’s where I believe the W3C Solid standard for data wallets can play a transformative role.
Currently, our AI systems are like fortified castles built on quicksand. We’re focusing on securing algorithms and models while leaving fundamental questions about who owns, accesses, and controls the underlying data largely unaddressed. Have you tried, for example, to move your Claude project artifacts into ChatGPT while reliably detecting any loss of integrity or confidentiality? The NSM’s designation of the AI Safety Institute as the primary industry contact is promising, but without a standardized data architecture, we’re setting ourselves up for a security nightmare.
Why Solid Matters for AI Security
For those unfamiliar, Solid is a set of protocols and standards, developed through the W3C Solid Community Group since 2016, that gives data owners true sovereignty over their data and greater transparency into how it is processed. Think of the alternative as hundreds of different keys for hundreds of different locks, and Solid as a single, secure master key system that logs every use. Right now, AI security is firmly in the first camp – a mess of proprietary systems that don’t talk to each other.
Let me break this down with a real-world scenario. Imagine you’re trying to secure an AI system that processes customer data across multiple cloud providers. Currently, you’re juggling different authentication systems, piecing together audit trails, and hoping your access controls are properly configured across all of them. It’s the kind of nightmare that keeps many of us up at night, given how few, if any, security vendors are ready to offer real AI breach solutions.
With Solid’s standardized approach, this all changes. Instead of proprietary authentication systems, you get a unified standard for data ownership that works everywhere – like bringing OAuth-level standardization to AI data access. Your audit trails become comprehensive and automated, not pieced together from different systems. And perhaps most importantly, data stays compartmentalized with granular permissions, so a breach in one area doesn’t compromise everything. Because ownership and control stay with the data owner, data integrity and provenance improve naturally, enabling safer, more effective AI with far less risk of privacy loss.
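To make the compartmentalization and audit ideas concrete, here is a minimal sketch in Python of a wallet-style data store with per-resource, per-agent permissions and an automatic audit trail. To be clear, every name here (`DataWallet`, `grant`, `read`) and every URI is a hypothetical illustration of the model, not the Solid API itself – real Solid deployments express these rules in standardized access-control documents enforced by the pod server.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataWallet:
    """Toy wallet: owner-controlled resources, granular per-agent grants,
    and an automatic audit log. A conceptual sketch of the Solid model only."""
    owner: str
    _resources: dict = field(default_factory=dict)  # path -> content
    _acl: dict = field(default_factory=dict)        # path -> {agent: set(modes)}
    audit_log: list = field(default_factory=list)   # every attempt, allowed or not

    def put(self, path, content):
        self._resources[path] = content
        # Each resource is compartmentalized: owner gets full control by default.
        self._acl.setdefault(path, {self.owner: {"read", "write", "control"}})

    def grant(self, path, agent, modes):
        # Granular grant on exactly one resource; nothing else is exposed.
        self._acl[path][agent] = set(modes)

    def read(self, path, agent):
        allowed = "read" in self._acl.get(path, {}).get(agent, set())
        # The audit trail is a side effect of the architecture, not an add-on.
        self.audit_log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "agent": agent, "path": path, "mode": "read", "allowed": allowed,
        })
        if not allowed:
            raise PermissionError(f"{agent} may not read {path}")
        return self._resources[path]

wallet = DataWallet(owner="https://alice.example/profile#me")
wallet.put("/customers/eu/records.ttl", "customer data")
wallet.grant("/customers/eu/records.ttl",
             "https://ai-vendor.example/app#agent", {"read"})

# The AI vendor can read exactly this one resource; a breach of its
# credentials exposes nothing else, and every attempt lands in the log.
wallet.read("/customers/eu/records.ttl", "https://ai-vendor.example/app#agent")
```

The point of the sketch is the shape of the design: permissions and audit logging live with the data, so any AI system consuming it inherits both for free.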
What the NSM Gets Right (And Where It Falls Short)
Reading through the NSM, I found myself nodding along with its emphasis on “mechanisms for risk management, evaluations, accountability, and transparency.” These are exactly the principles we need. The document shows a sound understanding of supply chain security for AI chips and makes collecting intelligence on competitors’ AI programs a priority – both crucial for our national security posture.
But here’s where it falls short: it’s missing the architectural foundation. While it talks about securing AI systems, it doesn’t address the fundamental need for a standardized data architecture. It’s like trying to secure a city without agreeing on how to build the roads, walls and gates. We need more than just guidelines – we need a common framework for how data moves and who controls it.
A Strategic Roadmap for Security Leaders
If you’re a CISO reading this, you’re probably wondering how to actually implement these ideas. I’ve been working with security teams on this transition, and here’s what the most effective approach looks like: Start with a pilot project in a controlled environment – perhaps your internal AI development platform. Use this to demonstrate how standardized data wallets can simplify access control while improving security posture.
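One way to ground that pilot: the access rules themselves are small, standardized documents. Below is a sketch of a Web Access Control (WAC) style rule of the kind Solid servers enforce, held as a Turtle string, with a deliberately naive string check standing in for the server’s real RDF evaluation. The URIs and resource names are made up for illustration.

```python
# A hypothetical WAC rule, as it would live alongside the resource on a pod.
# One shared vocabulary (acl:) replaces N proprietary per-cloud ACL formats.
wac_rule = """
@prefix acl: <http://www.w3.org/ns/auth/acl#> .

<#aiPipeline>
    a acl:Authorization ;
    acl:agent <https://ai-vendor.example/app#agent> ;
    acl:accessTo <./records.ttl> ;
    acl:mode acl:Read .
"""

def grants(rule: str, agent: str, resource: str, mode: str) -> bool:
    # Toy stand-in for the pod server's check: does the document mention
    # this agent, this resource, and this mode? (Real servers parse the RDF.)
    return (f"<{agent}>" in rule
            and f"<{resource}>" in rule
            and f"acl:{mode}" in rule)

# The pilot question in one call: can the AI pipeline read this dataset?
grants(wac_rule, "https://ai-vendor.example/app#agent", "./records.ttl", "Read")
```

A pilot built around documents like this gives your team something auditable to point at: the rule, the resource, and the agent are all explicit, versionable artifacts rather than settings buried in three different cloud consoles.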
Over the next six months, focus on building out the infrastructure for standardized authentication and data governance. This isn’t just about technology – it’s about establishing new processes that align with how AI actually uses data. You’ll find that many of your current security headaches around data access and audit trails simply disappear when you have a proper foundation.
The long-term vision should be a complete transition to wallet-based architecture for AI systems. Yes, it’s ambitious, but it’s also necessary. The CISOs I’ve talked to who have explored this path find that it significantly reduces their attack surface while making compliance much more straightforward.
The Path Forward
The NSM is a step in the right direction, but as security leaders, we need to push for more concrete technical standards. Solid provides a ready-made framework that could address many of the security and privacy concerns the NSM raises.
My recommendation? Start experimenting with Solid now as a technical solution that brings huge efficiencies. Don’t wait for more regulations and costly cleanup of technical debt. The organizations that build their AI systems on a Solid foundation of data sovereignty will be better positioned to meet present and future security and compliance requirements.
Bottom line: AI security isn’t just about protecting models and algorithms – it’s about ensuring the entire data lifecycle is secure, traceable, and under proper control. The NSM gives us the “should do”; Solid gives us the “how to”.