Category Archives: Security

Oracle Cloud Breach Denial Falls Apart: New Evidence Lands Hard

When a major cloud provider faces breach allegations, how the people in charge respond matters almost as much as the technical details. That’s exactly what’s happening with Oracle Cloud right now, as a paper-thin narrative is being blown apart over a potentially massive security incident affecting thousands of enterprise customers.

We all watched last week as “rose87168” posted on BreachForums claiming to have compromised Oracle Cloud’s login servers and exfiltrated approximately 6 million records containing sensitive authentication data. The haul reportedly includes Java KeyStore files (containing security certificates and encryption keys), encrypted SSO and LDAP passwords, and other authentication-related information.

Oracle’s response was… both instant and forceful:

There has been no breach of Oracle Cloud. The published credentials are not for the Oracle Cloud. No Oracle Cloud customers experienced a breach or lost any data.

The instant confidence seemed too abrupt to be reliable. Sound familiar?

Hegseth is the first Trump official to deny [the breach]

Indeed, just as The Atlantic exposed the facts of a national-security breach, cybersecurity firm CloudSEK is already telling a very different story than Oracle. The published evidence suggests the breach is not only completely real but could stand as one of the most significant supply-chain compromises.

So what do we actually know, and how plausible is this breach? Let’s look at some evidence:

  1. Proof of Access: The threat actor uploaded a simple text file to login.us2.oraclecloud.com containing their email address, which was captured by the Internet Archive’s Wayback Machine. This suggests they had write access to an Oracle Cloud server.
  2. Confirmation of Prod Environment: CloudSEK has presented documentation showing that the compromised server wasn’t a test environment but a legitimate production SSO endpoint used by real customers for OAuth2 authentication and token generation.
  3. Validation of Real Customers: By cross-referencing the leaked domain list with publicly available GitHub repositories, CloudSEK identified multiple genuine Oracle customers who had configurations referencing the affected login server. These included organizations like Rapid4Cloud (an Oracle Gold Partner) and several other enterprise companies.
  4. Applicability of 2021 Vulnerability: The Register noted the server appeared to be running Oracle Fusion Middleware 11G, which may have been vulnerable to CVE-2021-35587, a critical flaw in Oracle Access Manager. This vulnerability has public exploit code available and matches the attack vector described.
  5. Dump of Sample Data: The threat actor has since released a 10,000-line sample containing data from over 1,500 organizations, with evidence suggesting access to development, test, and production environments.

Despite this level of investigation producing reliable evidence, Oracle framed its response as if no breach were possible. In cybersecurity, such absolute denials sometimes occur when organizations believe a compromised system is not directly connected to customer data, or don’t have a complete picture of their own systems. Perhaps more to the point, Oracle may be defining “breach” narrowly (for example, distinguishing between credential theft and actual data exfiltration) and trying to reframe the issue as something different from what the security researchers are claiming.

Without more detailed explanation from Oracle, it’s difficult to reconcile such an early categorical denial with the flow of specific technical evidence that is starting to be revealed.

The breach claims sure seem increasingly plausible, despite the immediate denials, and for several good reasons. First, the breach vector follows a known weakness in cloud architecture (as outlined in our 2012 cloud security book); similar SSO/authentication system breaches have long traced back to unpatched vulnerabilities. Second, we have specific server names and paths that match legitimate Oracle Cloud infrastructure, plus a second investigation that provided documentation analysis, domain verification, and sample data examination. Third, speaking of documentation, the volume and structure of the leaked data would be extremely difficult to fabricate convincingly. That’s not to say AI couldn’t do the job today, but historically the internal consistency of a data dump has been a useful proof point.

What remains less clear is the disputed impact on Oracle and its customers. Authentication-related information was allegedly stolen, yet there’s no evidence yet that downstream customer environments were compromised. Nonetheless, this incident undoubtedly places Oracle’s security credibility under scrutiny in several remarkable ways.

The stark contrast between Oracle’s blanket denial and the detailed evidence presented by CloudSEK creates a troubling credibility gap. In security incidents, transparency builds trust, even when the news is bad. By rejecting the claims so quickly, Oracle opened a huge credibility chasm for independent researchers to step into and earn the trust it forfeited.

And of course if we’re talking about a breach through CVE-2021-35587, a CVSS 9.8 (CRITICAL) flaw published in 2021, it raises concerns about Oracle’s basic patch management and vulnerability remediation practices for its own cloud infrastructure. A critical patch should have been deployed within 72 hours of announcement. Last time I checked my watch, the year 2025 was more than a few hours later than 2021.

Incidentally, no pun intended, this detail tracks with my own experiences behind the scenes with Oracle (and inside the Salesforce infrastructure, for that matter) where patches sometimes lag so far behind industry baselines as to beg the question of why and how some “big name” Silicon Valley security executives get so wealthy so fast while seemingly asleep at the wheel… rhetorical, I know.

Apparently some organizations potentially affected by this breach will be learning about it through third parties rather than directly from Oracle, which goes back to the problem of establishing a voice of trust. Oracle likes to position itself as a security leader, dubiously emphasizing the security advantages of its cloud offerings over competitors. This incident of course challenges whether that narrative is just lipstick on a pig.

All that said, it’s worth noting that if Oracle has legitimate reasons to believe no breach occurred, a premature acknowledgment would also be unwarranted. The company may be conducting a thorough investigation before providing more details. However, that should be conveyed as such, with a statement of investigation rather than a categorical denial.

What now?

If you’re an Oracle Cloud customer, here’s what:

  1. Rotate Creds: Immediately rotate all SSO, LDAP, and associated credentials.
  2. Confirm MFA: Ensure strong multi-factor authentication is enforced across all cloud access points.
  3. Tune-up Monitoring: Increase alerting on suspicious authentication attempts or lateral movements.
  4. Independent Assessment: Consider engaging external experts to impartially evaluate potential exposure.
  5. Document, document: Maintain detailed records of your response actions in case this becomes relevant to compliance requirements and claims against Oracle.
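A quick first pass on step 4 is simply checking whether any of your own domains appear in the released sample. A minimal sketch, assuming you have the leaked sample as a plain list of domains (the function names and inputs are hypothetical, for illustration only):

```python
def normalize_domain(raw: str) -> str:
    """Lowercase, strip surrounding whitespace, and drop a trailing dot
    so that 'Example.COM ' and 'example.com' compare equal."""
    return raw.strip().lower().rstrip(".")

def exposed_domains(our_domains, leaked_sample):
    """Return the sorted subset of our domains that appear in a leaked list."""
    leaked = {normalize_domain(d) for d in leaked_sample}
    return sorted({d for d in map(normalize_domain, our_domains) if d in leaked})

# Hypothetical usage: compare your org's domains against the dump's domain list.
hits = exposed_domains(["corp.example.com", "Shop.Example.COM"],
                       ["shop.example.com", "othercorp.net"])
```

Any hit should trigger the credential rotation and monitoring steps above immediately; an empty result only means your domains weren’t in the *sample*, not that you’re in the clear.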

The fundamental challenge in cloud security isn’t new, and Oracle should be handling this in a way that doesn’t paint an even bigger target on its customers. When using third-party infrastructure, organizations are inherently dependent on the security practices and transparency of the management running that infrastructure. This debate also demonstrates the value of dedicated, independent security researchers who can reliably verify claims and provide additional context during incidents.

As this story continues to develop, the most important outcome would be increased clarity—whether that confirms a breach occurred or explains why the compelling evidence presented doesn’t indicate an actual security compromise. Either way, we’re all watching closely for lessons that can improve future security.

Titanic Chernobyl: the White House Unlearns National Security with Signal Starlink

We’ve witnessed what can only be described as how NOT to handle sensitive government technology and communications.

The installation of Starlink at the White House and the sloppy inclusion of a journalist in a Signal chat for military strike planning represent a dangerous rejection of established safety protocols by those who apparently believe they are above the law and therefore untouchable.

Chernobyl Brain: Rules Are For Others

The Chernobyl disaster offers a powerful analogy for our current situation. What made that catastrophe so devastating wasn’t merely technical failure, but the Soviet organizational culture that enabled it: the casual bypassing of safety protocols, the dismissal of expert warnings, and the reckless improvisation during a sensitive procedure, all stemming from a Hegseth-like belief that catastrophic consequences simply wouldn’t apply to them.

When national security officials coordinate military strikes via a consumer device with a consumer OS and a consumer app on a consumer network, we’re witnessing a similar disregard for established protocols. The Germans recently learned this, as if anyone needs to be reminded the Russians are always listening to everything.

Just as Chernobyl operators manually overrode safety systems with a “we know better” attitude, today’s officials override digital safeguards by moving classified communications to platforms never designed for such use.

The most chilling parallel? The apparent belief that they are exempt from consequences. As Jeffrey Goldberg’s shocking report revealed, defense officials shared operational details, target information, and weapons packages WITHOUT OPSEC, likely and knowingly violating the Espionage Act in the process. When confronted about this breach, the official response demonstrated true Chernobyl Brain: “The thread is a demonstration of the deep and thoughtful policy coordination between senior officials. The ongoing success of the Houthi operation demonstrates that there were no threats to troops or national security.”

Uh, what?

This response echoes the initial Chernobyl reaction: nothing to see here during symptoms of meltdown; the system is still functioning; no real harm done. It reflects a worldview where security breaches are inconsequential as long as nothing immediately explodes, a very dangerous miscalculation of accumulating risk.

Titanic Legs: Unsinkable Hubris

The Titanic’s tragedy stemmed largely from a belief in its own invulnerability. Its operators ignored iceberg warnings and maintained speed in dangerous conditions, confident in their “unsinkable” vessel. The casualties were considered an acceptable risk – until they weren’t.

This same hubris manifests in the White House’s technology decisions. The casual implementation of Starlink, described by experts as “shadow IT, creating a network to bypass existing controls” shows misplaced confidence that borders on deadly arrogance. Even more telling is the bizarre implementation: Starlink dishes installed miles away from the White House, with the connection routed back through existing (tapped) fiber lines.

Why take this approach? Because it creates exposure and weakness for the Russians to exploit. Because consequences are flouted. Because the rules that govern everyone, including federal records laws, classified communication protocols, and basic security principles, are treated as inconvenient obstacles to be challenged and ignored rather than essential safeguards.

When pressed about the inadequate safety of the Starlink setup, a White House source dismissively explained that “the old was trash,” as if personal convenience justifies creating national security vulnerabilities. This mirrors the Titanic’s rejection of caution in favor of speed… right to the bottom of the ocean.

Consequences For Thee, Not For Me

What makes these security breaches particularly troubling is the clear double standard at play. The administration that campaigned on “lock her up” over weak communication protocols now coordinates military strikes via weak communication protocols. The same officials who emphasize borders for safety routinely remove all the borders in technology.

This goes beyond carelessness because it is backed by the belief that consequences are for others. When the White House spokesperson defends the Starlink implementation by saying, “Just like the [insert any random name] did on numerous occasions, the White House is working to improve WiFi connectivity on the complex,” the message is clear: words have no meaning anymore because rules are no longer for those in power.

Improve?

The installation of parallel wireless systems creates security blind spots, monitoring gaps, and potential backdoors into sensitive networks. The use of commercial messaging apps on weak infrastructure for classified communications exposes operational details to potential interception. And most notable of all, we have absolute proof the White House accepts lip service from Hegseth when he’s obviously in breach of the law. Yet the attitude persists: we are untouchable; the damage to Americans won’t affect us when we move, like Snowden, to an apartment in Moscow.

From Recklessness to Disaster

Both Chernobyl and the Titanic demonstrate how quickly perceived invulnerability transforms into catastrophe. In both cases, the disaster wasn’t a bolt from the blue – it was the logical conclusion of accumulated shortcuts, ignored warnings, and systemic arrogance.

When officials treat national security infrastructure like a pig pen where established rules don’t apply to their mud slinging, they aren’t simply being careless, they’re setting the stage for predictable disaster. The accidental inclusion of a journalist in military planning didn’t lead to immediate catastrophe, thanks to the professionalism of that journalist, but it revealed a system where such accidents are not only possible but probable.

As one security expert noted regarding the Starlink implementation: “This is extra stupid to go satellite to fiber to actual site.” This isn’t the language of political disagreement, it’s the exasperation of true professionals watching rank muddy amateurs dismantle critical safeguards because they believe themselves immune to consequences.

Inevitable Reckoning

History teaches us that no one is truly untouchable, no matter how much they believe otherwise. The Titanic’s “unsinkable” reputation didn’t prevent it from sinking. Chernobyl’s operators’ confidence didn’t prevent catastrophic fallout.

The current approach to national security technology in bypassing established systems, ignoring expert warnings, and treating classified information casually, isn’t sustainable for another minute. These aren’t merely political choices; they’re fundamental security vulnerabilities that accumulate and worsen with time. Ask me about quantum threats in Signal.

When the inevitable breach occurs, when classified information is compromised (if not already), when military operations are exposed, when critical systems are penetrated, the consequences won’t be limited to those who created the vulnerabilities. Like Chernobyl’s radiation or the Titanic’s icy waters, the damage will spread far beyond those responsible.

Until the American people understand that no one is truly untouchable when it comes to security fundamentals, we remain on a collision course with consequences that no amount of privilege or power can deflect.

His OPSEC is a Lie: Hegseth Must Resign Now

Hegseth’s statement about being “clean on OPSEC” while simultaneously sharing sensitive military plans in an unsecured commercial app with an unvetted group that included a journalist shows a profound disconnect from reality.

1215ET: F-18s LAUNCH (1st strike package)
1345: ‘Trigger Based’ F-18 1st Strike Window Starts (Target Terrorist is @ his Known Location so SHOULD BE ON TIME – also, Strike Drones Launch (MQ-9s)
1410: More F-18s LAUNCH (2nd strike package)
1415: Strike Drones on Target (THIS IS WHEN THE FIRST BOMBS WILL DEFINITELY DROP, pending earlier ‘Trigger Based’ targets)
1536 F-18 2nd Strike Starts – also, first sea-based Tomahawks launched.
MORE TO FOLLOW (per timeline)
We are currently clean on OPSEC
Godspeed to our Warriors.

What’s particularly troubling is the contradiction between:

  1. Claiming to value operational security
  2. While completely failing to implement even basic security measures

The fact that detailed military strike plans were shared so casually, and that no one noticed an unauthorized participant for days, suggests either a complete lack of understanding about security protocols or a dangerous indifference to them, or both.

This kind of detachment from factual reality can be extremely dangerous in military contexts where lives depend on proper security procedures. History offers clear reminders of the consequences of OPSEC failures.

In 1961, the Bay of Pigs invasion collapsed partly because operational security was compromised—Cuban intelligence had detected preparations for the invasion, allowing Castro to mobilize and position his forces before the exiles even landed. The operation that was supposed to appear covert had become an open secret, with details appearing in newspapers like The New York Times days before the invasion.

Source: NYT

The impatient and sloppy approach demonstrated by Hegseth is especially concerning coming from senior defense leadership who should understand these historical lessons about the importance of protecting sensitive operational information.

It raises serious questions about competence and whether there’s a culture of saying the right words about security while ignoring the actual implementation of security measures.