Ukraine Trains Run on Time Despite Russian Attacks on Ticketing Systems

The recent Russian cyberattack on Ukraine’s railway infrastructure demonstrates a fundamental security principle: distributed, open systems consistently prove more resilient than centralized, closed ones. This principle, though counterintuitive for some, has profound implications for how we should design critical infrastructure in an era of increasing cyber threats.

Transportation efficiency studies across Europe provide compelling evidence for this principle in practice:

  1. London and Paris focus on constant proof-of-access through physical and digital barriers, creating an easily broken “prevention” system that is overly dependent on vendors building ever more expensive systems to tax every micro-movement.
  2. Berlin’s model accepts “open door” access for system resilience and maximum throughput at low cost and high gain, relying on “detection and enforcement” approaches that prioritize operational continuity.

The results speak for themselves:

  • Berlin’s system moves approximately 20-25% more passengers per hour during peak times due to fewer bottlenecks.
  • Berlin’s infrastructure costs are estimated to be 30-40% lower due to reduced need for physical barriers and monitoring systems.
  • The ROI is compelling from both economic and security perspectives.

Berlin’s distributed system principles mirror exactly what helped Ukraine’s railway system withstand a recent cyberattack, as reported by Reuters:

Blaming the cyberattack on the “enemy”, shorthand usually used by Kyiv to mean Russia, officials said rail travel had not been affected but that work was still under way to restore the online ticketing system more than 24 hours after the hack. An outage was first reported on Sunday when the rail company notified passengers about a failure in its IT system and told them to buy tickets on-site or on trains. “The latest attack was very systemic, unusual and multi-level,” rail company Ukrzaliznytsia wrote on the Telegram app.

By maintaining operational flow while ticketing availability was compromised, the railway demonstrated resilience through distributed and redundant systems:

Oleksandr Pertsovskyi, Ukrzaliznytsia’s board chairman, said on national television that the company had handled the fallout from the attack well. “Operational traffic did not stop for a single moment. The enemy attack was aimed at stopping trains, but we quickly switched to backup systems.”

This successful response also suggests that issuing high-volume, low-cost monthly tickets would make such attacks even less effective, since fewer passengers would depend on real-time ticketing at all. The pattern follows documented precedents of attackers focusing on authentication systems—from the 2016 Ukrainian power grid credential theft to the alleged 2025 compromise of Oracle Access Manager in Oracle Cloud.

The Russian approach to the Ukrainian railway reveals tactical limitations that explain its ineffectiveness. Their persistent focus on centralized authentication points rather than adapting to counter distributed security models represents a strategic vulnerability, similar to deploying conventional forces against highly mobile, asymmetric defenders.

Russia’s focus remains fixed on:

  1. Treating freedom of movement as a matter of individuals requiring tickets at centrally-controlled checkpoints, rather than as trusted privilege backed by distributed authority
  2. Believing psychological impact comes from impatience at perceived service degradation, rather than from actual kinetic harm

This approach parallels historical military failures against asymmetric opponents, similar to how Ukrainian mobile units have proven effective against Russian armored columns—echoing the British information warfare methods documented in their WWI Gaza/Beersheba campaign.

What makes the Russian attack significant isn’t the technical sophistication but how it puts London, Paris, and NYC on notice for having similar strategic weaknesses—the more authoritarian the model of civilian movement, the more vulnerable to attacks by foreign authoritarian adversaries.

The successful Ukrainian response offers three critical lessons for anyone designing data storage and identity management systems:

  • Distributed Resilience: Operations continued despite authentication compromise
  • Manual Fallbacks: Ticket issuance shifted to in-person (a minimal sketch of this pattern follows below)
  • Open Standards: Less dependency on proprietary authentication
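
These lessons translate directly into design. Below is a minimal sketch, assuming a hypothetical ticketing front end (none of the names reflect Ukrzaliznytsia’s actual systems), of how redundancy plus a manual fallback keeps service running when online ticketing is down:

```python
class OnlineBackend:
    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy

    def is_healthy(self) -> bool:
        return self.healthy

    def issue(self, passenger_id: str) -> dict:
        return {"passenger": passenger_id, "status": "ISSUED", "via": self.name}


def issue_ticket(passenger_id: str, backends: list) -> dict:
    # Try each redundant backend; an outage in one should never stop a train.
    for backend in backends:
        if backend.is_healthy():
            return backend.issue(passenger_id)
    # Manual fallback: degrade to an on-board sale rather than blocking travel.
    return {"passenger": passenger_id, "status": "BUY_ON_BOARD",
            "note": "online ticketing unavailable; pay on-site or on the train"}


if __name__ == "__main__":
    backends = [OnlineBackend("primary", healthy=False),
                OnlineBackend("backup", healthy=False)]
    print(issue_ticket("passenger-42", backends))  # falls back to on-board sale
```

The point of the sketch is that the failure mode is a business decision made in advance, not an emergency improvised under attack.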

This pattern of breaching systems through authentication vulnerabilities reveals predictable tactics that demand a new approach. The days of “lockout after three tries” and other simplistic Microsoft “Domain” approaches to security are clearly obsolete in today’s identity threat landscape.

Evolution of defense requires a fundamental return to first-principles of security architecture, moving away from centralized prevention toward distributed detection and resilience. Authentication systems should be designed with the assumption of compromise and logical resilience rather than the illusion of impenetrability—similar to how the 1970s “Inter-net” was designed with open protocols that could survive targeted Soviet threats.

This is the reality of modern information technology operations: authentication isn’t just another service to protect—it’s a primary battlefield that demands openness and interoperability as survival mechanisms.

Centralized systems built without distributed concepts are like a bridge hung from a few poorly guarded chains instead of the superior engineering of braided, inexpensive wires, where the failure of any single expensive link causes catastrophic collapse of the whole system.

Web 3.0 Requires Data Integrity – Communications of the ACM

If you’ve ever taken a computer security class, you’ve probably learned about the three legs of computer security—confidentiality, integrity, and availability—known as the CIA triad. When we talk about a system being secure, that’s what we’re referring to. All are important, but to different degrees in different contexts. In a world populated by artificial intelligence (AI) systems and artificially intelligent agents, integrity will be paramount.

What is data integrity? It’s ensuring that no one can modify data—that’s the security angle—but it’s much more than that. It encompasses accuracy, completeness, and quality of data—all over both time and space. It’s preventing accidental data loss; the “undo” button is a primitive integrity measure. It’s also making sure that data is accurate when it’s collected—that it comes from a trustworthy source, that nothing important is missing, and that it doesn’t change as it moves from format to format. The ability to restart your computer is another integrity measure.
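
A minimal sketch of that most basic integrity measure: a cryptographic hash acting as a tamper-evident fingerprint for a piece of data.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # SHA-256 digest: any change to the data produces a different fingerprint.
    return hashlib.sha256(data).hexdigest()

record = b"blood type: O negative"
expected = fingerprint(record)

# Later, wherever the record has traveled, we can verify it is unchanged.
print(fingerprint(b"blood type: O negative") == expected)  # True: integrity holds
print(fingerprint(b"blood type: A negative") == expected)  # False: change detected
```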

The CIA triad has evolved with the Internet. The first iteration of the Web—Web 1.0 of the 1990s and early 2000s—prioritized availability. This era saw organizations and individuals rush to digitize their content, creating what has become an unprecedented repository of human knowledge. Organizations worldwide established their digital presence, leading to massive digitization projects where quantity took precedence over quality. The emphasis on making information available overshadowed other concerns.

As Web technologies matured, the focus shifted to protecting the vast amounts of data flowing through online systems. This is Web 2.0: the Internet of today. Interactive features and user-generated content transformed the Web from a read-only medium to a participatory platform. The increase in personal data, and the emergence of interactive platforms for e-commerce, social media, and online everything demanded both data protection and user privacy. Confidentiality became paramount.

We stand at the threshold of a new Web paradigm: Web 3.0. This is a distributed, decentralized, intelligent Web. Peer-to-peer social-networking systems promise to break the tech monopolies’ control on how we interact with each other. Tim Berners-Lee’s open W3C protocol, Solid, represents a fundamental shift in how we think about data ownership and control. A future filled with AI agents requires verifiable, trustworthy personal data and computation. In this world, data integrity takes center stage.

For example, the 5G communications revolution isn’t just about faster access to videos; it’s about Internet-connected things talking to other Internet-connected things without our intervention. Without data integrity, for example, there’s no real-time car-to-car communications about road movements and conditions. There’s no drone swarm coordination, smart power grid, or reliable mesh networking. And there’s no way to securely empower AI agents.

In particular, AI systems require robust integrity controls because of how they process data. This means technical controls to ensure data is accurate, that its meaning is preserved as it is processed, that it produces reliable results, and that humans can reliably alter it when it’s wrong. Just as a scientific instrument must be calibrated to measure reality accurately, AI systems need integrity controls that preserve the connection between their data and ground truth.

This goes beyond preventing data tampering. It means building systems that maintain verifiable chains of trust between their inputs, processing, and outputs, so humans can understand and validate what the AI is doing. AI systems need clean, consistent, and verifiable control processes to learn and make decisions effectively. Without this foundation of verifiable truth, AI systems risk becoming a series of opaque boxes.
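
As a minimal sketch of the idea, assuming nothing about any particular AI framework, each stage of a pipeline can record a hash that commits to the previous stage, so a later auditor can detect any silent change between input, processing, and output:

```python
import hashlib, json

def h(payload: dict) -> str:
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def add_step(chain: list, stage: str, payload: dict) -> None:
    # Each entry commits to the previous one, forming a simple hash chain.
    prev = chain[-1]["digest"] if chain else ""
    entry = {"stage": stage, "payload": payload, "prev": prev}
    entry["digest"] = h(entry)
    chain.append(entry)

def verify(chain: list) -> bool:
    prev = ""
    for entry in chain:
        body = {k: entry[k] for k in ("stage", "payload", "prev")}
        if entry["prev"] != prev or entry["digest"] != h(body):
            return False
        prev = entry["digest"]
    return True

chain: list = []
add_step(chain, "input", {"dataset": "claims-2024", "rows": 120_000})
add_step(chain, "processing", {"model": "risk-scorer-v3", "epochs": 5})
add_step(chain, "output", {"prediction_file": "scores.parquet"})
print(verify(chain))  # True; altering any field breaks the chain
```

A production system would use signed, append-only logs rather than this toy chain, but the property is the same: inputs, processing, and outputs become mutually verifiable.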

Recent history provides many sobering examples of integrity failures that naturally undermine public trust in AI systems. Machine-learning (ML) models trained without thought on expansive datasets have produced predictably biased results in hiring systems. Autonomous vehicles with incorrect data have made incorrect—and fatal—decisions. Medical diagnosis systems have given flawed recommendations without being able to explain themselves. A lack of integrity controls undermines AI systems and harms people who depend on them.

They also highlight how AI integrity failures can manifest at multiple levels of system operation. At the training level, data may be subtly corrupted or biased even before model development begins. At the model level, mathematical foundations and training processes can introduce new integrity issues even with clean data. During execution, environmental changes and runtime modifications can corrupt previously valid models. And at the output level, the challenge of verifying AI-generated content and tracking it through system chains creates new integrity concerns. Each level compounds the challenges of the ones before it, ultimately manifesting in human costs, such as reinforced biases and diminished agency.

Think of it like protecting a house. You don’t just lock a door; you also use safe concrete foundations, sturdy framing, a durable roof, secure double-pane windows, and maybe motion-sensor cameras. Similarly, we need digital security at every layer to ensure the whole system can be trusted.

This layered approach to understanding security becomes increasingly critical as AI systems grow in complexity and autonomy, particularly with large language models (LLMs) and deep-learning systems making high-stakes decisions. We need to verify the integrity of each layer when building and deploying digital systems that impact human lives and societal outcomes.

At the foundation level, bits are stored in computer hardware. This represents the most basic encoding of our data, model weights, and computational instructions. The next layer up is the file system architecture: the way those binary sequences are organized into structured files and directories that a computer can efficiently access and process. In AI systems, this includes how we store and organize training data, model checkpoints, and hyperparameter configurations.

On top of that are the application layers—the programs and frameworks, such as PyTorch and TensorFlow, that allow us to train models, process data, and generate outputs. This layer handles the complex mathematics of neural networks, gradient descent, and other ML operations.

Finally, at the user-interface level, we have visualization and interaction systems—what humans actually see and engage with. For AI systems, this could be everything from confidence scores and prediction probabilities to generated text and images or autonomous robot movements.

Why does this layered perspective matter? Vulnerabilities and integrity issues can manifest at any level, so understanding these layers helps security experts and AI researchers perform comprehensive threat modeling. This enables the implementation of defense-in-depth strategies—from cryptographic verification of training data to robust model architectures to interpretable outputs. This multi-layered security approach becomes especially crucial as AI systems take on more autonomous decision-making roles in critical domains such as healthcare, finance, and public safety. We must ensure integrity and reliability at every level of the stack.
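
One hedged way to picture defense-in-depth for an ML pipeline is an independent check at each of the layers described above; the specific checks and names here are illustrative, not a standard:

```python
import hashlib

def check_storage(blob: bytes, expected_sha256: str) -> bool:
    # Hardware/storage layer: the bits on disk match what was published.
    return hashlib.sha256(blob).hexdigest() == expected_sha256

def check_schema(rows: list) -> bool:
    # File/data layer: records decode into the structure the pipeline expects.
    return all({"id", "label"} <= set(row.keys()) for row in rows)

def check_training(loss_history: list) -> bool:
    # Application layer: the training run behaved sanely (loss went down).
    return len(loss_history) >= 2 and loss_history[-1] < loss_history[0]

def check_output(score: float) -> bool:
    # Interface layer: the user-facing output stays in a plausible range.
    return 0.0 <= score <= 1.0

if __name__ == "__main__":
    blob = b"training data"
    print(check_storage(blob, hashlib.sha256(blob).hexdigest()))  # True
    print(check_schema([{"id": 1, "label": "spam"}]))             # True
    print(check_training([0.9, 0.4, 0.2]))                        # True
    print(check_output(0.7))                                      # True
```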

The risks of deploying AI without proper integrity control measures are severe and often underappreciated. When AI systems operate without sufficient security measures to handle corrupted or manipulated data, they can produce subtly flawed outputs that appear valid on the surface. The failures can cascade through interconnected systems, amplifying errors and biases. Without proper integrity controls, an AI system might train on polluted data, make decisions based on misleading assumptions, or have outputs altered without detection. The results of this can range from degraded performance to catastrophic failures.

We see four areas where integrity is paramount in this Web 3.0 world. The first is granular access, which allows users and organizations to maintain precise control over who can access and modify what information and for what purposes. The second is authentication—much more nuanced than the simple “Who are you?” authentication mechanisms of today—which ensures that data access is properly verified and authorized at every step. The third is transparent data ownership, which allows data owners to know when and how their data is used and creates an auditable trail of data provenance. Finally, the fourth is access standardization: common interfaces and protocols that enable consistent data access while maintaining security.

Luckily, we’re not starting from scratch. There are open W3C protocols that address some of this: decentralized identifiers for verifiable digital identity, the verifiable credentials data model for expressing digital credentials, ActivityPub for decentralized social networking (that’s what Mastodon uses), Solid for distributed data storage and retrieval, and WebAuthn for strong authentication standards. By providing standardized ways to verify data provenance and maintain data integrity throughout its lifecycle, Web 3.0 creates the trusted environment that AI systems require to operate reliably. This architectural leap for integrity control in the hands of users helps ensure that data remains trustworthy from generation and collection through processing and storage.
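
To make that concrete, here is a minimal sketch of a W3C Verifiable Credential expressed as data; the issuer, subject, and proof values are made-up placeholders, and a real credential carries an actual cryptographic signature:

```python
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:issuer-123",          # a decentralized identifier
    "issuanceDate": "2025-01-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:subject-456",         # the data owner's identifier
        "alumniOf": "Example University",
    },
    # In practice this is a digital signature over the canonicalized document,
    # which is what lets anyone verify provenance and integrity.
    "proof": {"type": "Ed25519Signature2020", "proofValue": "<placeholder>"},
}
print(credential["issuer"])
```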

Integrity is essential to trust, on both technical and human levels. Looking forward, integrity controls will fundamentally shape AI development by moving from optional features to core architectural requirements, much as SSL certificates evolved from a banking luxury to a baseline expectation for any Web service.

Web 3.0 protocols can build integrity controls into their foundation, creating a more reliable infrastructure for AI systems. Today, we take availability for granted; anything less than 100% uptime for critical websites is intolerable. In the future, we will need the same assurances for integrity. Success will require following practical guidelines for maintaining data integrity throughout the AI lifecycle—from data collection through model training and finally to deployment, use, and evolution. These guidelines will address not just technical controls but also governance structures and human oversight, similar to how privacy policies evolved from legal boilerplate into comprehensive frameworks for data stewardship. Common standards and protocols, developed through industry collaboration and regulatory frameworks, will ensure consistent integrity controls across different AI systems and applications.

Just as the HTTPS protocol created a foundation for trusted e-commerce, it’s time for new integrity-focused standards to enable the trusted AI services of tomorrow.

This essay was written with Bruce Schneier, and originally appeared in Communications of the Association for Computing Machinery (ACM), “the world’s largest educational and scientific computing society”.

Oracle Cloud Breach Denial Falls Apart: New Evidence Lands Hard

When a major cloud provider faces breach allegations, the method and style of the response from the people in charge matters almost as much as the technical details. That’s exactly what’s happening with Oracle Cloud right now, as a paper-thin denial narrative is being blown apart over a potentially massive security incident affecting thousands of enterprise customers.

We all saw last week when “rose87168” posted on BreachForums that they had compromised Oracle Cloud’s login servers and exfiltrated approximately 6 million records containing sensitive authentication data. This includes Java KeyStore files (containing security certificates and encryption keys), encrypted SSO and LDAP passwords, and other authentication-related information.

Oracle’s response was… both instant and forceful:

There has been no breach of Oracle Cloud. The published credentials are not for the Oracle Cloud. No Oracle Cloud customers experienced a breach or lost any data.

The instant confidence seemed too abrupt to be reliable. Sound familiar?

Hegseth is the first Trump official to deny [the breach]

Indeed, just as The Atlantic exposed the facts of a national-security-level breach, the cybersecurity firm CloudSEK is already telling a very different story than Oracle. The published evidence suggests the breach is not only real but may stand as one of the most significant supply chain compromises to date.

So what do we actually know, and how plausible is this breach? Let’s look at some evidence:

  1. Proof of Access: The threat actor uploaded a simple text file to login.us2.oraclecloud.com containing their email address, which was captured by the Internet Archive’s Wayback Machine (see the sketch after this list). This suggests they had write access to an Oracle Cloud server.
  2. Confirmation of Prod Environment: CloudSEK has presented documentation showing that the compromised server wasn’t a test environment but a legitimate production SSO endpoint used by real customers for OAuth2 authentication and token generation.
  3. Validation of Real Customers: By cross-referencing the leaked domain list with publicly available GitHub repositories, CloudSEK identified multiple genuine Oracle customers who had configurations referencing the affected login server. These included organizations like Rapid4Cloud (an Oracle Gold Partner) and several other enterprise companies.
  4. Applicability of 2021 Vulnerability: The Register noted the server appeared to be running Oracle Fusion Middleware 11G, which may have been vulnerable to CVE-2021-35587, a critical flaw in Oracle Access Manager. This vulnerability has public exploit code available and matches the attack vector described.
  5. Dump of Sample Data: The threat actor has since released a 10,000-line sample containing data from over 1,500 organizations, with evidence suggesting access to development, test, and production environments.
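
The first kind of evidence is something anyone can check independently. A minimal sketch, using the Internet Archive’s public availability API, confirms whether a capture of a given URL exists; it does not, by itself, prove what the capture shows:

```python
import json
import urllib.parse
import urllib.request

def wayback_capture(url: str) -> dict:
    # Ask the Wayback Machine whether it holds a capture of this URL.
    api = "https://archive.org/wayback/available?url=" + urllib.parse.quote(url, safe="")
    with urllib.request.urlopen(api, timeout=10) as resp:
        return json.load(resp).get("archived_snapshots", {})

if __name__ == "__main__":
    print(wayback_capture("login.us2.oraclecloud.com"))
```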

Despite this level of investigation producing reliable evidence, Oracle positioned its response as an absolute: no breach, period. In cybersecurity, such absolute denials sometimes occur when organizations believe a compromised system is not directly connected to customer data, or don’t have a complete picture of their own systems. Perhaps more to the point, Oracle may be defining “breach” narrowly (for example, distinguishing between credential theft and actual data exfiltration) and may want to reframe the issue as something different from what the security researchers are claiming.

Without more detailed explanation from Oracle, it’s difficult to reconcile such an early categorical denial with the flow of specific technical evidence that is starting to be revealed.

The breach claims sure seem increasingly plausible, despite immediate denials, and for several good reasons. First, the breach vector follows a known weakness in cloud architecture (as outlined in our 2012 cloud security book). Similar SSO/authentication system breaches have been known to come from unpatched vulnerabilities. Second, we have specific server names and paths that match legitimate Oracle Cloud infrastructure, and a second investigation that provided documentation analysis, domain verification, and sample data examination. Third, speaking of documentation, the volume and structure of the leaked data would be extremely difficult to fabricate convincingly. Not saying AI couldn’t do the job today, but historically the integrity of a data dump has been a useful proof point.

What remains less clear is the disputed impact on Oracle and its customers. Authentication-related information was allegedly stolen, yet there is no evidence so far of downstream customer environments being compromised. Nonetheless, this incident undoubtedly places Oracle’s security credibility under scrutiny in several remarkable ways.

The stark contrast between Oracle’s blanket denial and the detailed evidence presented by CloudSEK creates a troubling credibility gap. In security incidents, transparency builds trust, even when the news is bad. Oracle was so quick to reject the claims that it opened a huge chasm of credibility for other researchers to step into and claim the trusted voice.

And of course, if we’re talking about a breach via Oracle’s own CVSS 9.8 (CRITICAL) CVE-2021-35587, published in 2021, it raises concerns about Oracle’s basic patch management and vulnerability remediation practices for its own cloud infrastructure. A critical patch should have been deployed in less than 72 hours from announcement. Last time I checked my watch, the year 2025 was more than a few hours later than 2021.

Incidentally, no pun intended, this detail tracks with my own experiences behind the scenes with Oracle (and inside the Salesforce infrastructure, for that matter) where patches sometimes lag so far behind industry baselines as to beg the question of why and how some “big name” Silicon Valley security executives get so wealthy so fast while seemingly asleep at the wheel… rhetorical, I know.

Apparently some organizations potentially affected by this breach will be learning about it through third parties rather than directly from Oracle, which goes back to the problem of establishing a voice of trust. Oracle likes to position itself as a security leader, dubiously emphasizing the security advantages of its cloud offerings over competitors. This incident of course raises the question of whether that narrative is just lipstick on a pig.

All that said, it’s worth noting that if Oracle has legitimate reasons to believe no breach occurred, a premature acknowledgment would be equally unwarranted. The company may be conducting a thorough investigation before providing more details. However, that should be conveyed as such, with a statement of investigation rather than a categorical denial.

What now?

If you’re an Oracle Cloud customer, here’s what:

  1. Rotate Creds: Immediately rotate all SSO, LDAP, and associated credentials.
  2. Confirm MFA: Ensure strong multi-factor authentication is enforced across all cloud access points.
  3. Tune-up Monitoring: Increase alerting on suspicious authentication attempts or lateral movement (a minimal log-scan sketch follows this list).
  4. Independent Assessment: Consider engaging external experts to impartially evaluate potential exposure with novel methods.
  5. Document, document: Maintain detailed records of your response actions in case this becomes relevant to compliance requirements and claims against Oracle.
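
On the monitoring point (item 3), here is a deliberately tiny sketch of the idea; the log format is a made-up example, so adapt the field names to whatever your SIEM actually emits:

```python
from collections import Counter

FAILURE_THRESHOLD = 5  # alert when a single source exceeds this many failures

def flag_suspicious(events: list) -> list:
    # Count failed authentication attempts per source address.
    failures = Counter(e["source_ip"] for e in events if e["result"] == "FAILURE")
    return [ip for ip, count in failures.items() if count >= FAILURE_THRESHOLD]

if __name__ == "__main__":
    events = [{"source_ip": "203.0.113.7", "user": "svc-sso", "result": "FAILURE"}] * 6
    events.append({"source_ip": "198.51.100.2", "user": "alice", "result": "SUCCESS"})
    print(flag_suspicious(events))  # ['203.0.113.7']
```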

The fundamental challenge in cloud security isn’t new, and Oracle should be handling this in a way that doesn’t paint an even bigger target on its customers. When using third-party infrastructure, organizations are inherently dependent on the security practices and transparency of the management running that infrastructure. This debate also demonstrates the value of dedicated, independent security researchers who can verify claims and provide additional context during incidents.

As this story continues to develop, the most important outcome would be increased clarity—whether that confirms a breach occurred or explains why the compelling evidence presented doesn’t indicate an actual security compromise. Either way, we’re all watching closely for lessons that can improve future security.

Titanic Chernobyl: the White House Unlearns National Security with Signal Starlink

We’ve just witnessed what can only be described as a demonstration of how NOT to handle sensitive government technology and communications.

The installation of Starlink at the White House and the sloppy inclusion of a journalist in Signal chat for military strike planning represent a dangerous rejection of established safety protocols by those who apparently believe they are above the law and therefore untouchable.

Chernobyl Brain: Rules Are For Others

The Chernobyl disaster offers a powerful analogy for our current situation. What made that catastrophe so devastating wasn’t merely technical failure, but the Soviet organizational culture that enabled it: the casual bypassing of safety protocols, the dismissal of expert warnings, and the reckless improvisation during a sensitive procedure, all stemming from a Hegseth-like belief that catastrophic consequences simply wouldn’t apply to them.

When national security officials coordinate military strikes via a consumer device with a consumer OS and a consumer app on a consumer network, we’re witnessing a similar disregard for established protocols. The Germans recently learned this, as if anyone needs to be reminded the Russians are always listening to everything.

Just as Chernobyl operators manually overrode safety systems with a “we know better” attitude, today’s officials override digital safeguards by moving classified communications to platforms never designed for such use.

The most chilling parallel? The apparent belief that they are exempt from consequences. As Jeffrey Goldberg’s shocking report revealed, defense officials shared operational details, target information, and weapons packages WITHOUT OPSEC, likely and knowingly violating the Espionage Act in the process. When confronted about this breach, the official response demonstrated true Chernobyl Brain: “The thread is a demonstration of the deep and thoughtful policy coordination between senior officials. The ongoing success of the Houthi operation demonstrates that there were no threats to troops or national security.”

Uh, what?

This response echoes the initial Chernobyl reaction: nothing to see here during symptoms of meltdown; the system is still functioning; no real harm done. It reflects a worldview where security breaches are inconsequential as long as nothing immediately explodes, a very dangerous miscalculation of accumulating risk.

Titanic Legs: Unsinkable Hubris

The Titanic’s tragedy stemmed largely from a belief in its own invulnerability. Its operators ignored iceberg warnings and maintained speed in dangerous conditions, confident in their “unsinkable” vessel. The casualties were considered an acceptable risk – until they weren’t.

This same hubris manifests in the White House’s technology decisions. The casual implementation of Starlink, described by experts as “shadow IT, creating a network to bypass existing controls,” shows misplaced confidence that borders on deadly arrogance. Even more telling is the bizarre implementation: Starlink dishes installed miles away from the White House, with the connection routed back through existing (tapped) fiber lines.

Why take this approach? Because they can, even though it creates exposure and weakness for the Russians to exploit. Because consequences are treated as something for other people. Because the rules that govern everyone, including federal records laws, classified communication protocols, and basic security principles, are treated as inconvenient obstacles to be challenged and ignored rather than essential safeguards.

When pressed about the inadequacy of the Starlink setup, a White House source dismissively explained that “the old was trash,” as if personal convenience justifies creating national security vulnerabilities. This mirrors the Titanic’s rejection of caution in favor of speed… right to the bottom of the ocean.

Consequences For Thee, Not For Me

What makes these security breaches particularly troubling is the clear double standard at play. The administration that campaigned on “lock her up” over weak communication protocols now coordinates military strikes via weak communication protocols. The same officials who emphasize borders for safety routinely remove all the borders in technology.

This goes beyond carelessness because it is backed by the belief that consequences are for others. When the White House spokesperson defends the Starlink implementation by saying, “Just like the [insert any random name] did on numerous occasions, the White House is working to improve WiFi connectivity on the complex,” the message is clear: words have no meaning anymore because rules are no longer for those in power.

Improve?

The installation of parallel wireless systems creates security blind spots, monitoring gaps, and potential backdoors into sensitive networks. The use of commercial messaging apps on weak infrastructure for classified communications exposes operational details to potential interception. And most notable of all, we have clear proof the White House accepts lip service from Hegseth even when he is obviously in breach of the law. Yet the attitude persists: we are untouchable; the damage to Americans won’t affect us when we move like Snowden to an apartment in Moscow.

From Recklessness to Disaster

Both Chernobyl and the Titanic demonstrate how quickly perceived invulnerability transforms into catastrophe. In both cases, the disaster wasn’t a bolt from the blue – it was the logical conclusion of accumulated shortcuts, ignored warnings, and systemic arrogance.

When officials treat national security infrastructure like a pig pen where established rules don’t apply to their mud slinging, they aren’t simply being careless, they’re setting the stage for predictable disaster. The accidental inclusion of a journalist in military planning didn’t lead to immediate catastrophe, thanks to the professionalism of that journalist, but it revealed a system where such accidents are not only possible but probable.

As one security expert noted regarding the Starlink implementation: “This is extra stupid to go satellite to fiber to actual site.” This isn’t the language of political disagreement, it’s the exasperation of true professionals watching rank muddy amateurs dismantle critical safeguards because they believe themselves immune to consequences.

Inevitable Reckoning

History teaches us that no one is truly untouchable, no matter how much they believe otherwise. The Titanic’s “unsinkable” reputation didn’t prevent it from sinking. Chernobyl’s operators’ confidence didn’t prevent catastrophic fallout.

The current approach to national security technology in bypassing established systems, ignoring expert warnings, and treating classified information casually, isn’t sustainable for another minute. These aren’t merely political choices; they’re fundamental security vulnerabilities that accumulate and worsen with time. Ask me about quantum threats in Signal.

When the inevitable breach occurs, when classified information is compromised (if not already), when military operations are exposed, when critical systems are penetrated, the consequences won’t be limited to those who created the vulnerabilities. Like Chernobyl’s radiation or the Titanic’s icy waters, the damage will spread far beyond those responsible.

Until the American people understand that no one is truly untouchable when it comes to security fundamentals, we remain on a collision course with consequences that no amount of privilege or power can deflect.