Wiz DeepSeek “Research” Ignores Ethical Line – And They Know It

The security industry has a problem. Growing concerns about ethical boundaries in cybersecurity research are becoming impossible to ignore, and Wiz’s approach to vulnerability disclosure sits at the center of them.

The recent blog post by Wiz is a good example. They lay out an unethical intrusion, a targeted operation run without authorization and bearing the hallmarks of military intelligence tactics, as if it were just a happy story about what they apparently perceive as their new normal within political discourse.

Their notification to the world of DeepSeek’s exposed database does not read to me as a researcher disclosing a vulnerability. The circumstances of this disclosure raise questions about the motivations and methods behind such aggressive security research, given what the Israeli ex-special forces veterans who run Wiz say about doing the “wildest things”:

After you leave [Israel’s secret military technology unit of Special Operations — Unit 81] you realize that up until now you did the wildest things in your life together with the most talented people and you want to extend that into civilian life.

To set context here, I was the head of security for one of the largest and most successful database companies in history. I’m not exaggerating when I say I’ve dealt with literally tens of thousands of reports like the Wiz blog post, handling some of the most sensitive data in the world and working with hundreds of researchers. I’m applying that lens. I have also spent the better part of three decades as an authorized penetration tester myself, reporting vulnerabilities I’ve found, following an academic background (graduate degree and thesis) in the philosophy and history of military intervention (e.g. Mission 101), unconventional warriors, and insertions through and behind enemy lines.

“Intel mapping” operations have been particularly difficult for the [Israeli] army to justify on any kind of security grounds. That led earlier this year to unwelcome scrutiny from Israel’s top court, which gave the army until August to divulge the wording of its “mapping” protocol. The army’s cancellation of the practice last week means [secrecy remains for] purposes behind these random “mapping” raids. They are part of the gradual process by which the army acculturates its young soldiers into a life of committing habitual war crimes. It breaks down their sense of morality and any remnants of compassion…. It turns [targets] into nothing more than objects of suspicion and fear for the soldiers. Or as one Palestinian woman told Yesh Din: “The way they banged and came into the house was like entering somewhere with animals, not people.”

The details of this Wiz incident merit careful review by industry regulators to ensure compliance with established ethical standards in security research.

Therefore I will try to be as clear as possible, in layman’s terms, about what Wiz themselves admit: security experts specifically targeted DeepSeek, pushing into systems without prior authorization to see things they knew they shouldn’t see. Wiz staff say in their blog post that it was a rising company in the news, which gave them motivation to break in and find something damaging or embarrassing to publish publicly. Upon finding a door at this targeted company, they checked if it was locked, then entered and looked around at anything they could, landing and expanding. As soon as they found the first thing, they say, they tried more, and more, and more, until they felt they had quite a lot of evidence and details. Perhaps they have even more than they reveal.

In a normal ethical research scenario things unfold very differently from such a bizarrely tone-deaf admission of unethical forced entry and gathering.

Usually, I expect a researcher comes upon a random door they don’t understand or recognize. They check if it’s locked because they don’t know what they have found or even why it’s there. An Internet address has a port, and the port seems to be listening for commands. Maybe it’s supposed to be open? Maybe they were even invited to use it but have stumbled upon something they didn’t expect or want to see? When they use a command to understand what’s going on, they find something they know they should report to the owner, full stop. If they realize at first glance this door should have been locked, they stop and don’t need to go further, hopefully for obvious reasons. Instead their efforts center on ways to notify the owner to take action based on first discovery, even with a note that more discovery may have been possible but was not attempted. A good deed done is one that doesn’t go too far, and certainly one that doesn’t come with intentions of capitalizing on repeated, intentional overstepping of boundaries up front.
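To make that concrete, here is a minimal sketch of what “stop at the door” looks like in practice. This is my own illustration, not anyone’s actual tooling, and the host and port are hypothetical: one connection, one passive read, then stop and notify.

```python
import socket

def check_door(host: str, port: int, timeout: float = 3.0) -> str:
    """Confirm whether a service answers, then stop.

    One connection, one passive banner read. No commands are sent and
    no data is requested. The moment exposure is confirmed, the only
    next step is notifying the owner.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            try:
                banner = sock.recv(128)  # read only what the service volunteers
            except socket.timeout:
                banner = b""
        return f"{host}:{port} is open ({banner[:40]!r}). Stop here and notify the owner."
    except OSError:
        return f"{host}:{port} did not answer. Nothing to report."

# Hypothetical target; run only against systems you own or are authorized to test.
print(check_door("db.example.com", 9000))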

Wiz did the exact opposite of these simple research principles, unleashing a complicated and extensive series of intrusions, which to me looks like espionage with more in common with military intelligence operations than civilian research.

Think of them as literal Israeli mercenaries who were trained to rush into a house using their honed special operations technology to run from room to room mapping an entire place as quickly as possible to establish dominance over that target on the presumption of power transfer. Once they find the bedroom they yank out the drawers and find a diary. They scan and upload diary pages… and post findings to a high-profile website with a warning: “Good thing we aren’t the bad guys, these fine residents of 14 Abdallah Street should have known better than to leave any doors, drawers or books unlocked, as you can see from the private thoughts in their diary we copied. After all, what if some bad guys showed up and walked in without asking first?”

By their own published account, Wiz went far beyond checking whether the door was locked (a hypothetical sketch of what such enumeration looks like follows this list):

  • Executed SQL queries to examine database structure
  • Accessed and documented sensitive customer data and chat histories
  • Mapped internal APIs and backend systems
  • Published technical details about DeepSeek’s infrastructure
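For contrast with the “door check” sketch above, here is a hypothetical illustration of why the steps in that list go so much further. The queries below are my own generic constructions, not taken from Wiz’s post; the point is that every one of them returns real content from the target rather than a yes/no on exposure.

```python
# Hypothetical enumeration against an exposed SQL-speaking database.
# Generic statements of my own construction, for illustration only:
# each one RETURNS DATA from someone else's system, which is why
# "just enumeration" is still unauthorized access once you know
# whose system you are touching.
ENUMERATION_QUERIES = [
    "SHOW DATABASES;",                     # reveals the overall structure
    "SHOW TABLES;",                        # maps internal schema (reconnaissance)
    "SELECT * FROM log_stream LIMIT 10;",  # reads actual records: chats, keys, etc.
]

for query in ENUMERATION_QUERIES:
    # An ethical first touch stops before this loop ever runs.
    print(f"would execute: {query}")
```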

This wasn’t some accidental discovery requiring information gathering to achieve a responsible disclosure. This wasn’t even a humanitarian mission that tries to justify itself by exposing danger to prevent harm. The methodology pushed well beyond conventional boundaries of responsible vulnerability research, and it should raise serious ethical alarms within the cybersecurity community. Nation-state intelligence gathering disguised as private security research has of course already been a problem for the industry. Wiz knew exactly what they were looking for, and it sounds like they gleefully documented everything they found while blowing through stop sign after stop sign, grabbing DeepSeek by the… IP.

If you are a Wiz customer you should immediately ask whether you are safe from such behavior being turned on you, given what Wiz reveals about their management ethics. Customers should carefully weigh what it means to hire a vendor whose security research approach extends beyond traditional ethical guidelines. Did Lance Armstrong admit to cheating when confronted? Asking for a cyclist friend right now evaluating Wiz for a large enterprise deal.

The timing of their headline-grabbing, braggadocio-laden exploitation of a targeted company is particularly concerning given an ongoing lawsuit against Wiz that highlights very similar issues in the recent past. Orca Security, a leader in the space that Wiz abruptly entered and became strangely proficient in without explanation, alleges that Wiz engages in a clear pattern of acquiring confidential information without authorization.

…Wiz has hired former Orca employees and worked with third parties to acquire Orca’s confidential information relating to current and future product plans, marketing, sales, prospective customers, and prospective employees, and has used that confidential information in furtherance of its efforts to copy and to compete unfairly with Orca.

The latest incident with DeepSeek not only follows the same playbook – finding a vulnerability, then using it to gather competitive intelligence while claiming “responsible” IP-grabbing – it shows that Wiz named themselves the Wizard of Oz as more than just a silly aspiration to run the world. They appear to hold themselves unaccountable to basic ethics let alone laws that prevent such business practices.

Wiz attempts to justify their actions by stating “We did not execute intrusive queries beyond enumeration to preserve ethical research practices.” This statement is worse than a meaningless distinction; it’s an attempt to destroy our actual notions of unauthorized access. Admittedly a nation-state funding mercenaries to break and enter private property in times of “war” has a different authorization model. But Wiz appears to be trying to pass themselves off as civilians after jumping out of an airplane at 30K feet, as if an adrenaline-fueled Unit 81 “untouchable” halcyon buzz can work if you put on the wrong pants.

Shout out to Agent Zo.

A query against an unauthorized system is an intrusion; a query against a known system is a known unauthorized intrusion; an exploratory query seeking exposure of private data is a known unauthorized intrusion to violate rights. Claiming extensive steps in “enumeration” aren’t “intrusive” is like saying “I just read all the papers in the diary I found in the drawer in your dresser in your bedroom in your house, but don’t worry, I didn’t take anything, and you really should see someone for that drinking problem I read all about.”

The security industry needs to call this behavior what it is and prevent it being normalized. I suspect Wiz doesn’t even want what Wiz is, and what Wiz does. Self-defense and power (the sword) demands balance with moral and spiritual values (the light), seeking the ambivalent middle rather than a false choice between one or the other (Talmud Shabbat 21b).

Walking to the back of an office building under a giant bright sign that says Wiz and finding an unlocked door doesn’t give you the right to walk in, rifle through all their private information, and then claim you’re helping Wiz by pointing out the unlocked door by describing the papers you found. You know it’s Wiz before you even test the door. So the moment you test the door and find it unlocked that’s it! You tell Wiz and they tell you if that’s ok. Like walking in the front door and saying “hello, I’m here to see…” so they can say ok or no, instead of jumping over the front desk and making a run for it. There’s absolutely no reason to go so far so fast to look around and see what can be exposed when you know that’s not your job. You stop where everyone should know you stop, at first notice you are someplace you shouldn’t be. Thus to me at least, as well as the many others asking me about it, Wiz crossed clear ethical and likely legal lines, then published about it as if to say “look what an imbalanced sword can do.”

Such behavior damages the credibility of legitimate security research. Real security researchers get prior authorization from targeted assets. Real security researchers who chance upon unknown exposed assets approach them with the professional minimum necessary to find and notify owners, without over-accessing or over-exposing sensitive data. What Wiz did wasn’t research; their own detailed admissions expose the ethics failures.

That’s basically it. I ask you to consider their unauthorized access and intelligence gathering with that explanation. And if you read Wiz’s corporate history, you can fit it into a pattern that has already led to formal documentation for judges and courtrooms.

The security industry can’t keep looking the other way when one of their own repeatedly crosses the line and abuses the people we are supposed to protect. Americans should ask what a Silas Soule would do if he saw these acts of immorality, as if facing a digital Sand Creek. If we want security research to be taken seriously, we need to firmly reject this kind of “bad Stetson” behavior that wants to make disclosure indistinguishable from illegal corporate espionage. Security researchers build trust by holding a bright line on ethical disclosure, which means they recognize that exploitation and unauthorized data extraction undermine the entire industry.

The track record of Wiz needs to lead to accountability, not apologetic investments.

Their actions with DeepSeek continue a documented, concerning pattern of crossing ethical lines while thumbing their nose at basic security principles. Just because you can access something on a target doesn’t mean you should. The security community deserves better than this.

It’s time to have an honest conversation about where legitimate security research ends and unauthorized access begins. Wiz’s boasting about how they get away with things nobody should be doing, amid mounting allegations of espionage described in the Orca lawsuit, informs us of a company that operates intentionally without concern for the harms they cause others, presumably including their own customers.

| Wiz’s Actions | Legitimate Security Research |
| --- | --- |
| Intentionally targeted DeepSeek for any high-value exposure | Discovery is authorized and scoped, a matter of professional routine, or accidental, without blurring the lines between them |
| Deliberately pushed hard into a high-value target without authorization | Minimal access (least necessary for appropriate notification) |
| Conducted extensive reconnaissance with API mapping and SQL queries to gain customer data access | Focus on accuracy for notification, not extraction and exploitation for compromise |
| Published technical details about targeted corporate espionage, threatening industry research integrity | Prior authorization is documented for targeted research, following a code of ethics, with actual least-harm principles documented within routine, broad/generic research steps |

Musk of Sedition: Why Attacks Inside American Government Smell Like North Korea

Today’s CNN report about suicidal North Korean soldiers in Ukraine should terrify anyone who understands institutional collapse.

I’ve spent decades studying how societies descend into authoritarianism and as a security professional, I’m watching patterns that I know all too well emerge at unprecedented speed in American institutions.

Consider what we’re seeing in Ukraine: young North Korean soldiers carrying handwritten loyalty pledges, documenting each other’s “disloyalty,” removing protective gear to prove dedication, and detonating grenades rather than being captured.

A handwritten page found on one of the North Korean soldiers recorded acts of disloyalty by North Korean subordinates. Rebecca Wright/CNN

These aren’t just tactical choices – they’re the end result of a system that values loyalty to false prophets above all else, including human life.

Now look at what’s happening in American federal institutions. The Office of Personnel Management is installing new centralized communication systems that shatter decades of security protocols. Career civil servants are being illegally replaced by startlingly young loyalists. Traditional agency independence is being deliberately dismantled.

These parallels aren’t subtle to an expert in authoritarian dangers.

Here’s what makes this moment uniquely dangerous, and why it requires additional expertise in cybersecurity: technology is accelerating institutional collapse beyond anything we’ve seen in history.

Radio codes found on one of the North Korean soldiers. Rebecca Wright/CNN

When Mao deployed Red Guards, when Stalin conducted his purges, when the Shah’s SAVAK began its campaigns – these transformations took years. Today, a centralized email system can expose every federal employee to loyalty tests instantly. Social media can identify and target “disloyal” staff within hours by running a single query statement like “DEI”. A teenager with an assault rifle can be placed in charge of critical systems with a single administrative decision.

By the time most people recognize the automation of decline and destruction, the purge of professional expertise needed to prevent catastrophic steps – like a button-click to end hundreds of thousands of lives – may already be complete and unrecoverable.

When Twitter’s $44B purchase led to 80% value destruction, pundits laughed at Elon Musk as incompetent and cruel. They missed his actual intentions dog-whistled by him for years.

Hitler’s 1933 ‘Volksempfänger’ program gave away radios at a 75% loss to destroy democracy and replace it with Nazi adherents. Both sacrificed billions to gain control of communication infrastructure, celebrating deceptive and illegal “exit package” tactics meant to accelerate the end of freedom.

Seemingly “bad business” decisions of massive devaluation and loss make perfect sense when viewed as evil charity – tools for rapid institutional control and cult-like loyalty enforcement rather than profit-seeking ventures. The toxic exit packages are institutional suicide pills, similar to how Hitler’s “Night of Long Knives” eliminated opposition through emphasis on rapid “exits.”

The new appointees – averaging 29 years old compared to the typical 52 – are specifically being selected to lack the knowledge that would recognize catastrophic risks someone wants them to take… again (e.g. MAGA). When a 26-year-old was placed in charge of nuclear command protocols, they didn’t understand how keeping authentication systems separate from general communications networks is critical to safety – literally the most famous catastrophic design flaw in all hacker history (e.g. the 1983 NORAD near-miss and the infamous 2600 phreakers).

The patterns are clear: when loyalty becomes the only metric that matters, when youth are elevated specifically because they lack the judgment to resist, when technology enables instant implementation of control systems – you’re watching the death of professional judgment and institutional knowledge in real time.

Some will say this analysis is alarmist. They’ll say American institutions are resilient. They’ll say we’ve survived previous challenges. But they’re missing how technology has changed the game. The speed of institutional collapse in the digital age isn’t even comparable to historical examples that were measured in months and years. We don’t have the luxury of analog and physical warning signs.

The North Korean soldiers show us exactly where America is headed at warp speed because, unlike their 1980s view of the world, we are throwing $500 Billion at AI “end of society” announcements: young people primed to throw away lives based on loyalty tests alone, unable to adapt or think independently, following long-outdated patterns even as they die.

The time to recognize deadly devotion to loyalty over competence, to recognize the prioritization of control over effectiveness, is before it becomes irreversible. History is clear on this point: once institutional knowledge is purged, once professional judgment to protect lives is replaced by suicidal loyalty tests, once the young and inexperienced are given authority specifically because they lack the context to resist – the rushed slide into full institutional collapse becomes nearly impossible to stop. Even physical coercion becomes digital:

[Czechoslovakian] President Hácha was in such a state of exhaustion that he more than once needed medical attention from the [Nazi] doctors, who, by the way, had been there ready for service since the beginning of the interview. […] At 4:30 in the morning, Dr. Hacha, in a state of total collapse, and kept going only by means of injections, resigned himself with death in his soul to give his signature [for Hitler to seize power and invade].

We need to name what we’re seeing. This isn’t normal administrative change. This isn’t partisan politics as usual. This is the deliberate installation of North Korean-style loyalty systems in American institutions, accelerated by technology to a speed we’ve never seen before in human history.

The question isn’t why Trump regularly praises authoritarian leaders, including North Korea’s, or what he would do to be like them – history has answered such questions too many times to count. The question is whether enough people recognize it right here and right now to prevent America’s institutions from following North Korea’s path towards youth rushing to blow themselves up and take down democracy, just to prove their absolute loyalty to Musk and his assistant Trump.

Tesla design failures allegedly cause an unpredictable veering into trees and poles, causing catastrophic fires that trap occupants and kill them. Three young Piedmont students were burned to death in their brand new Cybertruck… among the nearly two dozen people tragically killed in their Tesla “Swasticars” in October and November of 2024 alone. Image source: Harry Harris
Swasticars: Remote-controlled explosive devices (REDs) stockpiled by Musk outside Berlin.

Nepenthes: Aggressive Anti-AI Malware Burns Bots

Some openly warn they just want to watch AI burn, and the robots come knocking anyway.

Aaron clearly warns users that Nepenthes is aggressive malware. It’s not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an “infinite maze” of static files with no exit links, where they “get stuck” and “thrash around” for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models. That’s likely an appealing bonus feature for any site owners who, like Aaron, are fed up with paying for AI scraping and just want to watch AI burn.

Or as the warning label puts it…

THIS IS DELIBERATELY MALICIOUS SOFTWARE INTENDED TO CAUSE HARMFUL ACTIVITY. DO NOT DEPLOY IF YOU AREN’T FULLY COMFORTABLE WITH WHAT YOU ARE DOING.
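For readers curious about the mechanism being described, here is a minimal sketch of a tarpit in the same spirit. This is not Nepenthes itself, just my own hypothetical illustration: every path resolves to a slow page of deterministic gibberish whose links lead only deeper into the maze.

```python
import hashlib
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ["maze", "mirror", "static", "archive", "index", "node", "shard"]

def babble(seed: str, n: int = 60) -> str:
    """Deterministic gibberish per URL, standing in for Markov babble."""
    rng = random.Random(hashlib.sha256(seed.encode()).hexdigest())
    return " ".join(rng.choice(WORDS) for _ in range(n))

class Tarpit(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(2)  # drip slowly to waste each crawler's time
        rng = random.Random(self.path)
        # Links point only to more synthetic pages: no exit from the maze.
        links = "".join(
            f'<a href="{self.path.rstrip("/")}/{rng.randrange(1 << 32):x}">more</a> '
            for _ in range(10)
        )
        body = f"<html><body><p>{babble(self.path)}</p>{links}</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    # Deploy only on traffic you intend to trap; see the warning above.
    HTTPServer(("127.0.0.1", 8080), Tarpit).serve_forever()
```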

Trained Vulnerability: Trump Demands Federal Staff Fall Victim to Attacks

Let’s just call this “trained vulnerability,” the kind usually found in authoritarian regimes that demand suicide as a loyalty test. Recent policy changes at the Office of Personnel Management (OPM) are trying to condition federal employees to step on a landmine (fall victim to common attack patterns).

No, really.

Two federal employees are suing the Office of Personnel Management (OPM) to block the agency from creating a new email distribution system — an action that comes as the information will reportedly be directed to a former staffer to Elon Musk now at the agency.

The suit, launched by two anonymous federal employees, ties together two events that have alarmed members of the federal workforce and prompted privacy concerns.

That includes an unusual email from OPM last Thursday, reviewed by The Hill, that said the agency was testing “a new capability” to reach all federal employees — a departure from staffers typically being contacted directly by their agency’s human resources department.

Also cited in the suit is an anonymous Reddit post Monday from someone purporting to be an OPM employee, saying a new server was installed at their office after a career employee refused to set up a direct line of communication to all federal employees.

Under the guise of administrative efficiency, new directives are dismantling years of security awareness training and creating an environment for phishing attacks to be indistinguishable from official communications.

That’s how dictatorship works.

The implementation of a new centralized email system without any proper safeguards means big trouble for America right here and now. Traditional federal IT security relied on distributed agency isolation as safety from abuse, with each department maintaining its own communication channels and employee databases. The new system shatters national security protections by creating cross-agency communication channels without baseline security controls or Privacy Impact Assessments. There’s no balance, there’s no resilience, there is only a pull-the-pin-and-shout-dear-leader’s-name “blaze of glory” mindset associated with Nazi Germany, the Hitlerjugend, and… Elon Musk.

A light-touch booklet originally released by the Imperial War Museum (UK), then republished by Ballantine (US) in 1971. Considered a collectible by Nazi supporters.
Source: Twitter

The conditioning for compromise is both systematic and comprehensive. Federal employees are instructed to respond to emails from unfamiliar systems, confirm private details to “test” messages, and accept administrative requests from outside their agency’s normal channels. This mirrors common attacks so closely that distinguishing legitimate requests from threats becomes impossible.

From a technical perspective, the reported low-quality setup creates an environment ripe for adversarial exploitation. Any attacker can replicate a “legitimate” system now by setting up a mail server, as official communication patterns match known phishing techniques. When official policy demands behavior that matches attack signatures, the ability to detect and prevent compromises is toast.
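As a sketch of that argument, consider a toy phishing rubric. The signal names and weights below are my own invention, not any agency’s real filter; the point is that a message following the reported OPM pattern trips every classic indicator.

```python
# Toy phishing rubric; signals and weights are invented for illustration.
PHISHING_SIGNALS = {
    "sender_outside_home_agency": 3,   # mail not from your agency's own HR domain
    "new_or_unrecognized_system": 2,   # "we are testing a new capability"
    "asks_for_reply_confirmation": 2,  # "reply to this message to confirm receipt"
    "bypasses_normal_channels": 3,     # skips the usual department HR chain
}

def phish_score(message_traits: set) -> int:
    """Sum the weights of every indicator a message exhibits."""
    return sum(w for s, w in PHISHING_SIGNALS.items() if s in message_traits)

# A hypothetical message matching the reported OPM pattern trips every signal:
opm_style = {
    "sender_outside_home_agency",
    "new_or_unrecognized_system",
    "asks_for_reply_confirmation",
    "bypasses_normal_channels",
}
print(phish_score(opm_style))  # 10 of 10: indistinguishable from an attack under this rubric
```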

This situation represents more than just poor security practice – it’s an active degradation of federal safety, like a neon sign over DC saying “we always click on everything”. The implementation of this system sets a dangerous precedent where administrative policy actively undermines common sense, let alone basic security practices. The challenge lies in protecting systems where threat actors and administrators were intentionally made indistinguishable from each other.

And the person installing the mail server, running the federal government? A child reporting to Elon Musk, literally an incompetent minor.

Sources tell WIRED that the OPM’s top layers of management now include individuals linked to xAI, Neuralink, the Boring Company, and Palantir. One expert found the takeover reminiscent of Stalin. …graduated from high school in 2024, according to a mirrored copy of an online résumé and his high school’s student magazine; he lists jobs as a camp counselor and a bicycle mechanic among his professional experiences, as well as a summer role at Neuralink.