Did Wiz Just Burn Their Mole by Reporting Microsoft’s AI Leak?

Numerous people have asked me about the recent Microsoft incident involving an AI-related data leak, several of them seeking expert commentary.

At this juncture it is prudent to avoid definitive statements, as the situation remains in its early stages. Nevertheless, certain peculiarities have come to light concerning the actions of a company bearing the somewhat uncomfortably comical name of “Wiz” — a company that claims to be obsessed with leaks.

To provide context, it should be said openly that Wiz has a less-than-stellar reputation within the security product community, a tarnished standing alluded to in hushed tones among industry veterans. Allegations of aggressive and unethical practices, ostensibly prioritizing unsustainable growth at the expense of others, have made Wiz seem like a boisterous interloper in a field of composed security professionals who prefer to protect and serve quietly.

The ongoing legal entanglement between Wiz and Orca serves as a glaring manifestation of the broader industry concerns, as reported in CSO Online.

Israeli cybersecurity startup Orca Security is suing local cloud security rival Wiz for patent infringement, alleging that its success and growth is built on “wholesale copying.” Orca has accused Wiz of taking its “revolutionary inventions” and creating a “copycat cloud security platform,” improperly trading off of Orca’s creations, according to a lawsuit filed in the US District Court, District of Delaware. “This copying is replete throughout Wiz’s business and has manifest in myriad ways,” it added.

Orca was founded in 2019 by Israeli-born cybersecurity technologist Avi Shua. In the four years since its founding, Orca has raised substantial investment funds and grown from fewer than a dozen to more than 400 employees today. In 2022, Orca was the recipient of Amazon Web Services Global Security Partner of the Year Award. Wiz was founded in January 2020, a year after Orca, by Assaf Rappaport, Ami Luttwak, Yinon Costica, and Roy Reznik, a team that previously led the Cloud Security Group at Microsoft.

Orca is clearly pissed at Wiz (pardon the pun). Orca also seems much more openly defensive than others who suspect foul play.

Going beyond surface-level scrutiny, further analysis reveals that Wiz’s founders served together as intelligence operatives in Israel before being employed by Microsoft security. This introduces three significant dimensions to the unfolding narrative.

Firstly, it is worth acknowledging that the Wiz founders have a competitive mindset honed in the crucible of combat intelligence operations. Their background as government-trained special operatives from a nation marked by profound existential concerns lends a distinctive perspective that may not readily align with conventional notions of legal compliance, fairness, or the civility expected of civilian products.

In this sense, maintaining some air of formality and humility could serve as an important countermeasure against extrajudicial transgressions, as seasoned espionage veterans advise. Regrettably, Wiz’s public image consistently reflects the diametrically opposed disposition of ostentatious self-promotion:

After you leave [Israel’s secret military technology unit of Special Operations — Unit 81] you realize that up until now you did the wildest things in your life together with the most talented people and you want to extend that into civilian life.

Secondly, legal experts have highlighted that recourse against lawyers who exhibit unethical behavior is at least feasible through professional reporting mechanisms. Observant lawyers can step forward when rules of conduct are violated, enabling courts to impose sanctions. While this is by no means perfect, it raises the question: what other avenues are available to security professionals, beyond resorting to protracted legal battles — an undeniably formidable hurdle? FTC, are you reading this?

Lastly, it is worth noting Wiz’s history as staff in Microsoft security, a facet that adds further complexity to the unfolding narrative of Wiz targeting Microsoft security. To be professionally precise, I am not asserting that Wiz has informants within Microsoft who provide insider tips on where to poke for soft or bad configurations. No. Rather, I am raising a question about the possibility of such an occurrence.

As part of the Wiz Research Team’s ongoing work on accidental exposure of cloud-hosted data, the team scanned the internet for misconfigured storage containers. In this process, we found a GitHub repository under the Microsoft organization named robust-models-transfer. The repository belongs to Microsoft’s AI research division… an attacker could have injected malicious code into all the AI models in this storage account, and every user who trusts Microsoft’s GitHub repository would’ve been infected by it.

However, it’s important to note this storage account wasn’t directly exposed to the public; in fact, it was a private storage account. The Microsoft developers used an Azure mechanism called “SAS tokens”, which allows you to create a shareable link granting access to an Azure Storage account’s data — while upon inspection, the storage account would still seem completely private.

The degree of privacy of this storage account is a noteworthy aspect, one that Wiz buries far below its headlines. The report opens by citing “ongoing work on accidental exposure,” as if the data were public, and then quietly pivots to the fact that the accident was extremely private. It prompts one to contemplate whether the first investigator on the scene of a secret accident may, in fact, be there for a special and unique private reason.

Let me also be clear that Wiz research posted a fair critique of Microsoft’s overarching security model. The report is actually on target even though it operates “outside the box” (מבצע מחוץ לקופסה), like an air force jet that somehow miraculously flies straight through extensive defenses and hits an exact target inside the box.

As the old joke goes, Iran tries to rattle Israel by revealing 2 million soldiers ready to fight. Israel backs them down saying “we already knew, and we’re not ready to feed 2 million prisoners”.

The Wiz recommendation to establish dedicated storage accounts for external sharing — limiting the potential impact of accidentally over-privileged, hidden, long-term tokens to external data — is astute and prudent, especially within the realm of trustworthy AI.

…we recommend creating dedicated storage accounts for external sharing, to ensure that the potential impact of an over-privileged token is limited to external data only.

Amen to that! This is something Microsoft definitely needs to hear, and their customers must demand more. A lack of segmentation, coupled with a lack of granular token revocation on an opaque centralized system, is a dangerous mix. There are much better ways.
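To make “better ways” concrete, here is a minimal sketch — assuming Python with the azure-storage-blob SDK, and with hypothetical account, container, and policy names — of a dedicated external-sharing account that issues read-only links through a stored access policy, so a leaked link can be killed by deleting the policy rather than rotating account keys:

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import (
    AccessPolicy,
    BlobServiceClient,
    ContainerSasPermissions,
    generate_container_sas,
)

# Hypothetical names: an account used ONLY for external sharing,
# so an over-privileged token can never reach internal data.
ACCOUNT = "externalshare"
ACCOUNT_KEY = "<account-key>"

service = BlobServiceClient(
    account_url=f"https://{ACCOUNT}.blob.core.windows.net",
    credential=ACCOUNT_KEY,
)
container = service.get_container_client("shared-models")

# A stored access policy keeps permissions and expiry server-side.
policy = AccessPolicy(
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=7),
)
container.set_container_access_policy(
    signed_identifiers={"research-share": policy}
)

# The SAS carries only a reference to the policy, not baked-in rights.
sas = generate_container_sas(
    account_name=ACCOUNT,
    container_name="shared-models",
    account_key=ACCOUNT_KEY,
    policy_id="research-share",
)
print(f"https://{ACCOUNT}.blob.core.windows.net/shared-models?{sas}")
```

Deleting or tightening the “research-share” policy immediately invalidates every link issued against it — exactly the granular revocation that a raw, account-key-signed SAS token lacks.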

Wiz correctly brings focus to data storage methods for AI, which should be personally dedicated rather than easily over-shared. A proper multi-user distributed model for AI data controls greatly reduces the looming danger of major privacy and integrity breaches. I agree with their analysis here.

And then, unfortunately…

While I concur with many technical details of the Wiz blog, I also discern a certain amount of unwarranted, self-serving sensationalism. It is not my intention here to downplay the intellectual acumen of the Wiz team. They are ex-intelligence, after all, trained by some of the best in the world to be the best at exploits and communication.

Their discussion could have stayed sharp and relevant to emerging data platform safety, touching on AI as a direct beneficiary of more distributed safety models. I definitely would not have stood in the way had they stayed there, especially on the “lack of monitoring capabilities” of centralized proprietary data stores raising the problem of secret “privileged tokens as a backdoor”.

Yet instead I observed their extensive blog post wandering into amateurishly immodest places, pressing readers hard (especially Microsoft customers) to jump into Wiz’s sales funnel.

Clumsy. Lazy. Arrogant.

A comparison of Wiz’s blog tone and content with Microsoft’s official remarks on the exact same topic highlights some very uncomfortable disparities.

No customer data was exposed, and no other internal services were put at risk because of this issue. No customer action is required in response to this issue. […] Our investigation concluded that there was no risk to customers as a result of this exposure.

Microsoft admits a bad token event and then unequivocally asserts that no customer data was exposed and no other internal services were jeopardized. They further state that no customer action is required. Their conclusion is that there was no risk to customers from the configuration error.

These key statements (pardon the pun) are conspicuously absent from Wiz’s hyperventilated extra analysis, which seems designed to draw attention for revenue.

Microsoft’s narrative keeps risk in perspective: it discusses why configuration errors occur and emphasizes that this issue was promptly addressed upon discovery. They suggest the issue be used as an educational opportunity, sharing internal/private discussions to facilitate learning. Again, it is perhaps pertinent to note that it was ex-employees of Microsoft security who established Wiz to compete directly with Microsoft security.

GitHub’s secret scanning service monitors all public open-source code changes for plaintext exposure of credentials and other secrets. This service runs a SAS detection, provided by Microsoft, that flags Azure Storage SAS URLs pointing to sensitive content, such as VHDs and private cryptographic keys. Microsoft has expanded this detection to include any SAS token that may have overly-permissive expirations or privileges.

This modest extension of Microsoft’s scanning is the actual critical development here. And I don’t see that detail mentioned at all by Wiz, as they try to sell their own scanner.
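For a sense of what such scanning involves, here is a toy heuristic in Python — my own sketch, not GitHub’s or Microsoft’s actual detector — that flags Azure Storage SAS URLs whose signed permissions (sp=) allow writes or whose signed expiry (se=) sits far in the future:

```python
import re
from datetime import datetime, timezone
from urllib.parse import parse_qs, urlparse

# Matches Azure blob URLs carrying a SAS signature (sig=) in the query string.
SAS_URL = re.compile(
    r"https://[\w.-]+\.blob\.core\.windows\.net/[^\s\"']*\?[^\s\"']*sig=[^\s\"']+"
)

def flag_risky_sas(text, max_days=90):
    """Return SAS URLs that look over-privileged or over-long-lived."""
    findings = []
    for match in SAS_URL.finditer(text):
        url = match.group(0)
        params = parse_qs(urlparse(url).query)
        perms = params.get("sp", [""])[0]   # signed permissions, e.g. "racwdl"
        expiry = params.get("se", [""])[0]  # signed expiry, ISO 8601
        # Write, delete, or list rights on a shared link are a red flag.
        risky = any(p in perms for p in "wdl")
        try:
            exp = datetime.fromisoformat(expiry.replace("Z", "+00:00"))
            risky = risky or (exp - datetime.now(timezone.utc)).days > max_days
        except ValueError:
            pass  # unparseable expiry: keep whatever flag we already have
        if risky:
            findings.append(url)
    return findings
```

The token in this incident reportedly had an expiry decades in the future — exactly the pattern a check like this exists to catch.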

Wiz instead appears to emphasize, above all, how they alone came into possession of Microsoft’s private and sensitive personal data, including passwords, secret keys, and internal Microsoft Teams messages, insinuating a very deep level of safety breach.

SO SCARY. Buy! The Wiz blog literally says “we recommend” and then… their product.

According to sources, Wiz purportedly took its moniker from former Israeli military intelligence personnel who regarded themselves as omnipotent “Wizards.” The appellation draws inspiration from “The Wizard of Oz,” a popular story that underscores the vulnerability of Americans, who can be swayed by fear into funneling their money and backing into deceptive technologies and detrimental sources of authority. It highlights a mastery of crafty maneuvers and superficial victories over genuine substance.

To clarify, I am not alleging the existence of an informant, as would be required to make the Wiz scanning service somehow more intelligent, effective, and targeted than Microsoft’s own competing services. I am merely raising questions about the actions of former Microsoft security employees who have entered into direct competition with their previous employer, and who seem repeatedly tangled up with Microsoft security as if from within while standing outside.

The allegations of the Orca lawsuit, as well as murmurs within the security industry about Wiz’s unsavory business practices, do bring to mind potential further investigations into how Wiz really “researches” flaws.

At the end of the day, a CISO comparing the Microsoft and Wiz versions of the same story should wonder whether to risk engaging with offensively oriented braggadocios, given that many more compelling professionals — quiet, modest, and effective — are available and trustworthy.


On Robots Killing People

The robot revolution began long ago, and so did the killing. One day in 1979, a robot at a Ford Motor Company casting plant malfunctioned—human workers determined that it was not going fast enough. And so twenty-five-year-old Robert Williams was asked to climb into a storage rack to help move things along. The one-ton robot continued to work silently, smashing into Williams’s head and instantly killing him. This was reportedly the first incident in which a robot killed a human; many more would follow.

At Kawasaki Heavy Industries in 1981, Kenji Urada died in similar circumstances. A malfunctioning robot he went to inspect killed him when he obstructed its path, according to Gabriel Hallevy in his 2013 book, When Robots Kill: Artificial Intelligence Under Criminal Law. As Hallevy puts it, the robot simply determined that “the most efficient way to eliminate the threat was to push the worker into an adjacent machine.” From 1992 to 2017, workplace robots were responsible for 41 recorded deaths in the United States—and that’s likely an underestimate, especially when you consider knock-on effects from automation, such as job loss. A robotic anti-aircraft cannon killed nine South African soldiers in 2007 when a possible software failure led the machine to swing itself wildly and fire dozens of lethal rounds in less than a second. In a 2018 trial, a medical robot was implicated in killing Stephen Pettitt during a routine operation that had occurred a few years earlier.

You get the picture. Robots—”intelligent” and not—have been killing people for decades. And the development of more advanced artificial intelligence has only increased the potential for machines to cause harm. Self-driving cars are already on American streets, and robotic "dogs" are being used by law enforcement. Computerized systems are being given the capabilities to use tools, allowing them to directly affect the physical world. Why worry about the theoretical emergence of an all-powerful, superintelligent program when more immediate problems are at our doorstep? Regulation must push companies toward safe innovation and innovation in safety. We are not there yet.

Historically, major disasters have needed to occur to spur regulation—the types of disasters we would ideally foresee and avoid in today’s AI paradigm. The 1905 Grover Shoe Factory disaster led to regulations governing the safe operation of steam boilers. At the time, companies claimed that large steam-automation machines were too complex to rush safety regulations. This, of course, led to overlooked safety flaws and escalating disasters. It wasn’t until the American Society of Mechanical Engineers demanded risk analysis and transparency that dangers from these huge tanks of boiling water, once considered mystifying, were made easily understandable. The 1911 Triangle Shirtwaist Factory fire led to regulations on sprinkler systems and emergency exits. And the preventable 1912 sinking of the Titanic resulted in new regulations on lifeboats, safety audits, and on-ship radios.

Perhaps the best analogy is the evolution of the Federal Aviation Administration. Fatalities in the first decades of aviation forced regulation, which required new developments in both law and technology. Starting with the Air Commerce Act of 1926, Congress recognized that the integration of aerospace tech into people’s lives and our economy demanded the highest scrutiny. Today, every airline crash is closely examined, motivating new technologies and procedures.

Any regulation of industrial robots stems from existing industrial regulation, which has been evolving for many decades. The Occupational Safety and Health Act of 1970 established safety standards for machinery, and the Robotic Industries Association, now merged into the Association for Advancing Automation, has been instrumental in developing and updating specific robot-safety standards since its founding in 1974. Those standards, with obscure names such as R15.06 and ISO 10218, emphasize inherent safe design, protective measures, and rigorous risk assessments for industrial robots.

But as technology continues to change, the government needs to more clearly regulate how and when robots can be used in society. Laws need to clarify who is responsible, and what the legal consequences are, when a robot’s actions result in harm. Yes, accidents happen. But the lessons of aviation and workplace safety demonstrate that accidents are preventable when they are openly discussed and subjected to proper expert scrutiny.

AI and robotics companies don’t want this to happen. OpenAI, for example, has reportedly fought to “water down” safety regulations and reduce AI-quality requirements. According to an article in Time, it lobbied European Union officials against classifying models like ChatGPT as “high risk,” which would have brought “stringent legal requirements including transparency, traceability, and human oversight.” The reasoning was supposedly that OpenAI did not intend to put its products to high-risk use—a logical twist akin to the Titanic owners lobbying that the ship should not be inspected for lifeboats on the principle that it was a “general purpose” vessel that also could sail in warm waters where there were no icebergs and people could float for days. (OpenAI did not comment when asked about its stance on regulation; previously, it has said that “achieving our mission requires that we work to mitigate both current and longer-term risks,” and that it is working toward that goal by “collaborating with policymakers, researchers and users.”)

Large corporations have a tendency to develop computer technologies to self-servingly shift the burdens of their own shortcomings onto society at large, or to claim that safety regulations protecting society impose an unjust cost on corporations themselves, or that security baselines stifle innovation. We’ve heard it all before, and we should be extremely skeptical of such claims. Today’s AI-related robot deaths are no different from the robot accidents of the past. Those industrial robots malfunctioned, and human operators trying to assist were killed in unexpected ways. Since the first-known death resulting from the feature in January 2016, Tesla’s Autopilot has been implicated in more than 40 deaths according to official report estimates. Malfunctioning Teslas on Autopilot have deviated from their advertised capabilities by misreading road markings, suddenly veering into other cars or trees, crashing into well-marked service vehicles, or ignoring red lights, stop signs, and crosswalks. We’re concerned that AI-controlled robots already are moving beyond accidental killing in the name of efficiency and “deciding” to kill someone in order to achieve opaque and remotely controlled objectives.

As we move into a future where robots are becoming integral to our lives, we can’t forget that safety is a crucial part of innovation. True technological progress comes from applying comprehensive safety standards across technologies, even in the realm of the most futuristic and captivating robotic visions. By learning lessons from past fatalities, we can enhance safety protocols, rectify design flaws, and prevent further unnecessary loss of life.

For example, the UK government already sets out statements that safety matters. Lawmakers must reach further back in history to become more future-focused on what we must demand right now: modeling threats, calculating potential scenarios, enabling technical blueprints, and ensuring responsible engineering for building within parameters that protect society at large. Decades of experience have given us the empirical evidence to guide our actions toward a safer future with robots. Now we need the political will to regulate.

This essay was written with Bruce Schneier, and previously appeared on Atlantic.com.

Quoted in “Enterprises will test the limits of LLMs”

An interesting new report by Jon Reed offers a lot of thought about the use of LLMs in the enterprise. He included the following comment from me, calling it “spicier” than others:

As per my assertion that ChatGPT won’t adapt well to the enterprise, Ottenheimer responded:

I agree with this line of thinking. Your belief is well-founded in what probably should be described as post-enlightenment thinking. The post-Newton – or perhaps post-Hume – scientific rigor of discarding things that are probabilistically false helps Enterprises navigate towards a progressive upside of profit without causing harm. The very nature of an Enterprise business model is that it operates using efficiencies, better known as regulations, to avoid costly known bad behaviors. Academic approaches to data lean towards overly “wide spectrum” or “both sides” doctrines, dragging everything and anything into view yet eschewing accountability for errors. ChatGPT has had some very problematic missteps and hasn’t yet proven it won’t blindly drive an Enterprise into some wasteful, entirely avoidable accidents and real harm.

MIT Study: RoboTaxis Ruin Cities

From the outset, it was evident to anyone with basic knowledge of urban dynamics that Uber would have negative effects on cities. It’s not a matter of complex calculations; it’s a fundamental principle of urban transportation. Increasing the number of cars on the road leads to more problems.

The same common sense reasoning suggests that RoboTaxis will likely exacerbate these issues.

Even MIT has acknowledged its mistake in endorsing Uber: they were so juiced on Utopia that they failed to recognize the glaring warning signs.

This utopian vision was not only compelling but within reach. After publishing our results, we started the first collaboration between MIT and Uber to research a then-new product: Uber Pool (now rebranded UberX Share), a service that allows riders to share cars when heading to similar destinations for a lower cost.

One thing that really bothers me is when people discuss Utopia like this, as if they are working on a tangible reality. The term was coined by Sir Thomas More in 1516 in his book of the same name. It is a combination of two Greek words, “ou” (meaning “not”) and “topos” (meaning “place”), which roughly translates to “no place” or “nowhere.” MIT’s “utopian vision” of cars was going nowhere, by definition!

Considering this, MIT might want to consider reimbursing students’ tuition fees if it is teaching that achieving a utopian vision is feasible and the end is nigh. Honestly, one might as well attend church for free instead of paying stupid money to MIT, if it’s just the study of compelling yet unattainable beliefs.

MIT’s fixation on the dogma of revolution through detached-STEM thinking (pun intended) totally neglected the intricate, real-world dynamics of human behavior within complex systems. This classic mistake is exactly what prompted the formation of the Fabian Society in London in 1884, an anti-upheaval organization (gradualism of social reform) that has successfully handled such issues for well over a century.

In an article brimming with STEM remorse about falling for the false profits of Uber (pun intended), MIT now emphatically cautions against getting involved with RoboTaxis.

Our research was technically right, but we had not taken into account changes in human behavior. Cars are more convenient and comfortable than walking, buses and subways — and that is why they are so popular. Make them even cheaper through ride-sharing and people are coaxed away from those other forms of transit. This dynamic became clear in the data a few years later: On average, ride-hailing trips generated far more traffic and 69% more carbon dioxide than the trips they displaced. We were proud of our contribution to ride-sharing but dismayed to see the results of a 2018 study that found that Uber Pool was so cheap it increased overall city travel: For every mile of personal driving it removed, it added 2.6 miles of people who otherwise would have taken another mode of transportation. As robotaxis are on the cusp of proliferating across the world, we are about to repeat the same mistake, but at a far greater scale.

Ah, the sweet taste of pride. Often followed by a not-so-graceful tumble, right? But seriously, why was MIT “proud of our contribution to ride-sharing”?

The correlation between an increase in subsidized cars and heightened traffic congestion seems like an obvious observation. Individuals from places as distant as Davis (a three-hour journey) were opting to “commute” to San Francisco, not for conventional employment but to essentially circle the city, nap in their vehicles, and contribute to the problem of public defecation, all while serving as “drivers” for those unwilling to walk. The city’s functionality ground to a halt because Uber failed to consider the finite capacity of the streets to move people, a capacity that was rapidly being consumed.

The issue at hand? Once more, it’s the age-old equation: more cars equals more headaches.

This is wisdom older than your grandma’s secret apple pie recipe. It’s not brain surgery; it’s not even “hey, we’re on the brink of utopia” snake-oil salesmanship as ancient as steam engines.

Taxis, or cars in general, really shine when it comes to serving folks who can’t sprint or comfortably hop onto a bus. But when you attempt to cram everyone into a car, you’re basically signing up for a masterclass in waste and inefficiency.

Hold onto your cabriolet hats, folks! The Uber saga was a spectacular display of wastefulness and inefficiency, a saga foreshadowed by centuries of shared transportation history.

  • Hacker? England 1625 from Hackney (although some claim French haquenee)
  • Cab (Cabriolet)? England 1820 from French cabrioler
  • Taxi? Germany 1895 from French taxe

This is not new stuff. Yet in the 2010s instead of taking a leisurely stroll for a few blocks, people decided to gather on bustling street corners, waiting to congest the roads in colossal, empty, self-centered metal boxes, leaving behind a trail of pollution and traffic snarls that could rival a spaghetti monster convention.

I think it’s pretty amazing that these people without scientific backgrounds — or really any education at all — think they have the right to decide the [transit systems]. And it blows my mind that they are getting away with it.

Picture this: an entire city street, maybe even a whole neighborhood, held hostage by the one guy who’s treating his latte frappuccino like it’s a cauldron of magic potion. Meanwhile, his Uber, just a stone’s throw away, has decided to throw a temper tantrum in the middle of the road, blocking all traffic while screaming at the top of its metallic lungs, “CHAD! Anyone, anywhere, please tell me if you’ve seen a CHAD! I’m here, waiting for CHAD!”

It’s like a scene from a sitcom where the latte-sipping wizard and the rogue Uber car team up to create a traffic jam worthy of a Shakespearean tragedy. “To honk or not to honk, that is the question!” No five-star ratings allowed by the thousands of people inconvenienced by the stupid “ride-hailing app” performances they have to witness.

Source: Twitter, 1 Sep 2022 — “Reports say Yandex.Taxi was hacked, forcing many drivers to the same address in Moscow; the result was an enormous traffic jam on Kutuzovsky Prospekt.”

Ah, the history lesson of the ages! People were grappling with this conundrum for centuries, especially when those old-timey hackney cabs were turning city streets into chaos. Then, like a dazzling revelation from the 1800s, we discovered that buses, streetcars, and subways were the real MVPs. We all have places to be, so you hop on board, take a seat, and zip your lip. It works wonders because we’re all in this together, not stuck in that backward world of cutthroat competition like Uber.

For god’s sake they even named the stupid company Uber. How could the disaster of callous capitalist nihilism be any less obvious?

The whole ride-share “explosion” felt like a group of overconfident, privileged guys who missed the memo on history, thinking they could throw billions at reinventing the wheel and somehow make it less, well, wheel-like. I was there, both inside and outside those companies, shaking my head in disbelief.

Around 2012, it seemed like I was a lone tech warrior resisting the awful siren call of Uber, amid meetings with dozens of big-shot executives running billion-dollar empires. They were all in, betting their fortunes on Uber, while I stood my engineering ground armed with the simple wisdom of human behavior studies. I’d show up with train tickets and bus receipts, each sporting single-digit price tags, and they laughed at my “lesser” status while casually tossing around hundreds or more for an Uber ride.

At one point I found myself in an exclusive invitation to Uber’s HQ, where they rolled out the red carpet and asked me to help lead their security team. I decided to go out of courtesy and a dash of curiosity. But as soon as they started pitching their grand plan — “We want to be as indispensable as water, so people can’t survive without paying us…” — I felt like I was in a bad sci-fi Dystopia (lesser Utopia) and made a hasty exit, practically diving for the door.

“Let them ride in cake,” did you say?!

Approximately ten years later, while I was strolling through the streets of San Francisco, one of the original developers of Uber made a startling confession to me. They admitted that their early product design was shockingly naive. In their detached-STEM enthusiasm, they had believed that constantly harvesting sensitive personal data through various sensors (geolocation, motion, video, microphones) during every car ride — essentially covert surveillance of customers without their knowledge — would somehow benefit society. They eventually had to leave the project upon the stark realization that this approach was a grave ethical error: creating an app pretending to be essential for survival while relentlessly spying on users for profit.

So here we are, MIT advising us to study human behavior to predict, well… known human behavior. Makes perfect sense to me. What took them so long? I guess they had to stop believing.

Now, if you’re dealing with a disability, a robotaxi might make as much sense as a luxurious, oversized wheelchair or, heck, even an elevator (vertical subway car) instead of stairs. But for the rest of us, it’s time to put on those walking shoes, head to the train, and stop being the roadblock to progress!

A train to a remote office in Connecticut was no joke, for example — a logistical maze in America’s shameless suburban planning system. I arrived early using an $18 train from NYC, then found a local bus that cost me $3 for the last mile, and walked a sidewalk from the stop to the door. The other executive pulled up behind me, sweaty, late, and barking, “No problem, it was only $200 for my Uber, but we got some bad traffic.” He should have been fired on the spot.

Despite decades of working in Silicon Valley, inside the largest and most successful tech companies, to this day I’ve taken only one Uber ride. As soon as I stepped into it, I knew it was wrong. The Fabians probably would have just said: “no shit, Sherlock.”