Neuralink “Exploration” Based on Apartheid Torture of Captives to Death

Wouter Basson, known as “Doctor Death”, led the Apartheid government’s clandestine chemical and biological warfare program, Project Coast, which captured and assassinated people for having anti-apartheid thoughts. He did not apologize, showed no remorse, and after 13 years of fighting in court was found guilty of unethical conduct.

First, let’s just get out of the way that white South African children exposed to horribly racist Apartheid lies were raised to believe Black people should never be allowed to keep private thoughts.

Inside South Africa, riots, boycotts, and protests by black South Africans against white rule had occurred since the inception of independent white rule in 1910. Opposition intensified when the Nationalist Party, assuming power in 1948, effectively blocked all legal and non-violent means of political protest by non-whites. The African National Congress (ANC) and its offshoot, the Pan Africanist Congress (PAC), both of which envisioned a vastly different form of government based on majority rule, were outlawed in 1960 and many of their leaders were imprisoned. The most famous prisoner was a leader of the ANC, Nelson Mandela, who had become a symbol of the anti-Apartheid struggle.

Indeed, it was Nelson Mandela’s private thoughts while in prison (as well as his sophisticated use of secret distributed encryption technology) that were credited with winning the war against Apartheid.

Getting a link to someone’s thoughts used to be called detention and torture, or, as many Americans know from their own history of denying Black people privacy since the 1770s, cynically referred to as rubber-hose cryptanalysis.

Plain and simple.

Now add a third cell to the classic rubber-hose cryptanalysis cartoon, one that describes Neuralink: “surgically install a device that discloses his most private thoughts.”

Stephen Biko is the canonical example, because he might’ve survived if he had better privacy technology to protect his data (his movements were intercepted so he could be taken captive by police and tortured to death in prison).

Stephen Bantu Biko was an anti-apartheid activist in South Africa in the 1960s and 1970s who founded the Black Consciousness Movement, empowering and mobilizing the urban black population to fight for human rights.

Even with the rise of better privacy technology, the brain remains a crucial aspect of safety. Things you know, such as a password, are thoughts that are supposed to be beyond the reach of anyone or anything if you refuse to disclose them.

Remember, Nelson Mandela took on and defeated the entire South African government by keeping his thoughts secret yet shared extremely selectively.

With that in mind…

Second, let’s take a look at the latest news about a white South African man who grew up in a family actively promoting and directly benefiting from Apartheid.

[Musk’s grandfather was] leader in a fringe political movement that called itself Technocracy Incorporated, which advocated an end to democracy and rule by a small tech-savvy elite. During World War II, the Canadian government banned the group, declaring it a risk to national security. Haldeman’s involvement with Technocracy continued, though, and he was arrested and convicted of three charges relating to it. Once he got to South Africa, he added Black Africans to his list of rhetorical targets.

An avowed white nationalist aligned with Hitler gives important context to Elon Musk’s childhood. The old man who pushed to spread Apartheid with “technocracy” had a grandson who is now the one allegedly torturing captive animals to death in an ill-minded attempt to use technology to strip away the physical privacy of thoughts.

Public records reviewed by WIRED, and interviews conducted with a former Neuralink employee and a current researcher at the University of California, Davis primate center, paint a wholly different picture of Neuralink’s animal research. The documents include veterinary records, first made public last year, that contain gruesome portrayals of suffering reportedly endured by as many as a dozen of Neuralink’s primate subjects, all of whom needed to be euthanized.

Reading this stuff is truly awful, reminiscent of Nazi experiments, demonstrating again the inhumane and cruel lies that Elon Musk often runs with.

Additional veterinary reports show the condition of a female monkey called “Animal 15” during the months leading up to her death in March 2019. Days after her implant surgery, she began to press her head against the floor for no apparent reason; a symptom of pain or infection, the records say. Staff observed that though she was uncomfortable, picking and pulling at her implant until it bled, she would often lie at the foot of her cage and spend time holding hands with her roommate.

Animal 15 began to lose coordination, and staff observed that she would shake uncontrollably when she saw lab workers. Her condition deteriorated for months until the staff finally euthanized her. A necropsy report indicates that she had bleeding in her brain and that the Neuralink implants left parts of her cerebral cortex “focally tattered.”

This was a healthy monkey before Elon Musk had her tortured to death for selfish reasons.

Shown a copy of Musk’s remarks about Neuralink’s animal subjects being “close to death already,” a former Neuralink employee alleges to WIRED that the claim is “ridiculous,” if not a “straight fabrication.” “We had these monkeys for a year or so before any surgery was performed,” they say. The ex-employee, who requested anonymity for fear of retaliation, says that up to a year’s worth of behavioral training was necessary for the program, a time frame that would exempt subjects already close to death.

Think of it this way. South Africa’s secret police didn’t bother to detain and torture any Black people “close to death already” and neither would Elon Musk or the fools who decide to work for his evil intentions.

He plays with fire, everyone else gets burned.

The labs sought healthy subjects for a specific reason that makes perfect sense: to keep captive subjects alive while forcing them to disclose all their secrets. This is nothing new to veterans of the CIA, who know that capturing someone terminally ill to extract information isn’t worth the bother.

Musk propaganda is literally the exact inversion of truth, as if to prove the South African methods that kept Apartheid alive aren’t dead yet. Rockets blow up on launch “as intended.” Cars keep killing more and more people. Captive patients die “as expected.” Biko “shot himself” with a rifle in the back of the head while his hands were tied behind his back. Whatever the absolute worst outcome is, it gets described in a way that, no matter how far from the truth, makes all liability go up in a puff of white smoke.

Overuse of the notorious X (a white-supremacist “good luck” symbol) is meant to signal someone who intentionally disrespects established protocols, acting as an amorphous, unaccountable, unwilling and lawless subject who never accepts any responsibility for anything.

He selected healthy subjects so he could test dangerous implants and measure the effects on their health. He obviously needs to prove his toys, meant to destroy privacy, aren’t a source of harm, or people will reject them for causing terminal illness (which they in fact are doing).

Common sense test: if you only choose terminally ill patients for a test of new technology, how would anyone ever suitably prove that technology wasn’t the cause of their immense suffering and early death?

Instead of doing the right thing, rising to the hard work of proving no harm, he has resorted to the usual gaslighting claim that any and all harm should be expected, even when it is very obviously and totally unexpected.

To assess whether the technology is causing harm or not, researchers typically follow established protocols, which is the antithesis of Elon Musk’s constant demands that nobody ever follow established protocols (because they would quickly expose his fraud).

In all, the company has killed about 1,500 animals, including more than 280 sheep, pigs and monkeys, following experiments since 2018, according to records reviewed by Reuters and sources with direct knowledge of the company’s animal-testing operations. The sources characterized that figure as a rough estimate because the company does not keep precise records on the number of animals tested and killed.

They say their company doesn’t keep records on the number killed. If you don’t keep records, you are peddling ignorance and not science.

Musk told employees he wanted the monkeys at his San Francisco Bay Area operation to live in a “monkey Taj Mahal”

Presumably Musk thinks being a soulless monster is amusing when he says he wants his test subjects to live in a mausoleum. Does it make any more sense if he says he wants his monkeys to live in a casket six feet underground? Maybe he doesn’t know what the Taj Mahal is. Either way…

What the sources really mean, since there is literally no way to perform research in an unexplored field like this without keeping detailed records, is that they keep everything secret to avoid accountability (just like during Apartheid).

In related news: Elon Musk demanded cameras be installed for surveillance of people buying his products WITHOUT THEIR KNOWLEDGE AND AGAINST THEIR INTERESTS.

At one meeting, he suggested using data collected from the car’s cameras—one of which is inside the car and focused on the driver…. One of the women at the table pushed back. “We went back and forth with the privacy team about that,” she said. … Musk was not happy. … “I am the decision-maker at this company, not the privacy team,” he said. “I don’t even know who they are.”

“The decider” brags that nobody else matters and that he doesn’t know or care who the experts are anyway. He’ll make the dumbest decision possible and call it genius. You’re the enemy if you disagree. Of course he doesn’t care what truth is; he’s making it up like a tin-pot wannabe dictator in Africa (this apple didn’t fall far from its horribly racist family tree).

In other related news: police refused to charge Elon Musk with a crime even though he posted video of himself in his Tesla clearly breaking the law. Historians may recognize this as similar to when the South African Apartheid state set up parallel and unequal information access and record-keeping regimes to create secrecy and lack of accountability only for… white supremacists.

India High Court Bans Generative AI Use Harmful to Celebrity

Not the real Elvis, but then again Elvis was just a white guy stealing from Black musicians as an obvious impostor (Big Mama Thornton, Lloyd Price, Chuck Berry, Lavern Baker, Ray Charles, Roy Hamilton, Arthur Crudup, Junior Parker, Fats Domino, Arthur Gunter… just to name a few of his victims). Elvis now could be argued to have been a criminal given an India High Court ruling.

According to reports, Anil Kapoor, a highly renowned figure, is seeking protection against the unauthorized use of his name, image, and voice for commercial purposes. He wishes to prevent his public image from being depicted in a manner that he considers negative (and denies him control, including profit rights).

…Anil Kapoor is one of the most celebrated and acclaimed successful actors in the industry who has appeared in over 100 films, television shows, and web series. He said Kapoor has also endorsed a large variety of products and services and has appeared in several advertisements as well.

Sounds pretty famous.

Nothing says you have achieved fame like selling out endorsing a large variety of products and services. If you presented me with an AI-generated Anil Kapoor image on a shampoo bottle, altered in some way (e.g. skin darkened), alongside a non-AI version, I’d be hard-pressed to tell which one was authentic.

But my own ignorance about who this man wants to be seen as is irrelevant to assigning rights for his data, just like when someone says they can’t tell the difference between Elvis and the dozens of Black musicians he stole from. It actually matters that those Black musicians lost their audiences and their income when some young white boy used the latest technology to steal others’ data and give them no control or credit.

This court case of Kapoor centers on the fact that he should decide how he appears, control how others are allowed to portray him, and keep more money for himself if his persona is being used for profit. That doesn’t have much to do with celebrity in my mind, except that it’s easy to say the person is the product. Really it’s a more universal issue: everyone should control their own data, regardless of whether they can sell out endorse a large variety of products and services. But I suppose I have to admit his product endorsements mean he suffers the kind of perceptible loss in revenue that lets him stand up on behalf of us all.

He apparently even says a Mumbai word for “excellent” in such a way that needs protection from AI, because how he says it always links back to him.

…Kapoor’s counsel Anand submitted that the expression “jhakaas”, a Marathi slang, was popularised by the actor in Hindi films and as per press reports how he expresses the word is exclusively used by him. Anand claimed that Kapoor popularised this term in the 1980s with his unique style and delivery in various films and public appearances. “What’s interesting is this is not jhakaas alone, it’s the way he says it with a twisted lip,” Anand added to which Justice Singh said this is what the HC has to protect and not the word itself.

A twisted lip is interesting? Isn’t the definition of a signature move that its uniqueness means every use is a provable reference, therefore easily protected? I wouldn’t call that interesting, given other similar examples.

Spiky peroxide-blonde-haired Billy Idol repeats the word “masturbatory” three times with a sneer

Sneering while saying the word masturbatory. Clearly nobody but Billy Idol should be allowed to do this.

But seriously, I get the concept of this case is “informational self-determination” (sounds far better in German: informationelle Selbstbestimmung) and so I’m following eagerly along because this is about the future of the Web.

Yet also something doesn’t quite seem right in India.

What actually becomes interesting is a High Court negatively portraying people while arguing that negative portrayals are real harms of huge consequence.

The judge used very obviously racist language in an attempt to explain the rights of a celebrity and the damage to his reputation from unwanted negative portrayals.

Justice Singh said while there can be no doubt that free speech about a well-known person is protected in the form of write-ups, parody, satires, criticism etc, which is genuine, but when the same crosses the line and results in tarnishment, blackening or jeopardising the individual’s personality and elements associated with the individual, it is illegal.

Blackening? Excuse me, Justice Singh?

Kapoor must be glad he isn’t black, as the court says blackening him crosses a line into real harm.

“The technological tools that are now available make it possible for any unauthorised user to make use of celebrities’ persona, by using such tools including Artificial Intelligence. The celebrity also enjoys the right of privacy and does not wish that his or her image, or voice is portrayed in a dark manner as is being done on porn websites,” the court added.

Portrayed in a dark manner? Come on.

Are we seriously supposed to believe being portrayed in a dark manner means crime has been committed? Isn’t dark something good? I hear that India can’t get enough dark chocolate lately, for example, claiming somehow all kinds of innovation and health benefits over the awful bad stuff of light chocolate:

High in Antioxidants
May Lower Blood Pressure
Improves Heart Health
Boosts Brain Function
May Lower Cholesterol
Helps Control Blood Sugar
Reduces Stress

They left out “makes justice system less racist”.

I am not in any way endorsing any products here, definitely not saying you should taste the supreme benefits of Royce India dark chocolate. To start with, I claim absolutely no celebrity…

Anyway, you can see in the court statement above the big AI money quote alongside all the racism, in case you were wondering how this case differed from decades of conflicts over pictures or videos.

Racism bubbles up hot and steamy throughout this court’s narrative about protecting a rich and powerful celebrity from any negative depictions.

The Court can’t turn a blind eye to such misuse of a personality’s name and other elements; dilution and tarnishment are all actionable torts which Kapoor would have to be protected against, Justice Singh said.

Don’t turn a blind eye to tarnishment, they say, a word that means… wait a minute… have to look this one up in a dictionary just to be sure… to become darker.

WOT. Again?!

Is there any possible way in India for a High Court to say someone is harmed other than referring to dark as harm and inherently bad?

…the aspiration for white skin can be more directly traced to colonialism much in the way that racism originates with slavery and colonialism. It is with the arrival of the British colonialists that we see specific codified color lines. Unlike previous waves of incursions, the British, with their distinct whiteness, specifically emphasized the separation between themselves and the Indians. A large body of historical and socio-cultural literature has documented the British emphasis on whiteness as a form of racial superiority and their justification of colonization…

Actionable torts, indeed.

For me the case really, seriously raises the deeper question of whether racial discrimination is a tort.

Can a court repeatedly emphasize that dark is bad and light is good, making obvious negative depictions of huge swaths of society, while it claims to be protecting society against unwanted negative depictions?

I mean if someone used generative AI to actually darken Kapoor’s skin as a test case for this court, it seems by their words he would need to be protected from this. No? Self-own. On that same point, a court’s repeated portrayal of darker things as lesser or worse means it is repeatedly engaging in the very thing it claims is so awful that it must be stopped immediately.

Did Wiz Just Burn Their Mole by Reporting Microsoft’s AI Leak?

Numerous inquiries have been directed to me regarding a recent Microsoft incident related to AI data loss, with several individuals seeking my expert commentary.

At this juncture, it is prudent to exercise caution in offering definitive statements, as the situation remains in its early stages. Nevertheless, certain peculiarities have come to light concerning the actions of a company bearing the somewhat uncomfortably comical name of “Wiz” that claims to be obsessed with leaks.

To provide context, it is imperative to bring into the open that Wiz has endured a less-than-stellar reputation within the security product community, with a tarnished standing alluded to in hushed tones among industry veterans. Allegations of aggressive and unethical practices, ostensibly prioritizing unsustainable growth at the expense of others, have rendered Wiz akin to a boisterous interloper within the realm of composed security professionals who prefer to protect and serve quietly.

The ongoing legal entanglement between Wiz and Orca serves as a glaring manifestation of the broader industry concerns, as reported in CSO Online.

Israeli cybersecurity startup Orca Security is suing local cloud security rival Wiz for patent infringement, alleging that its success and growth is built on “wholesale copying.” Orca has accused Wiz of taking its “revolutionary inventions” and creating a “copycat cloud security platform,” improperly trading off of Orca’s creations, according to a lawsuit filed in the US District Court, District of Delaware. “This copying is replete throughout Wiz’s business and has manifest in myriad ways,” it added.

Orca was founded in 2019 by Israeli-born cybersecurity technologist Avi Shua. In the four years since its founding, Orca has raised substantial investment funds and grown from fewer than a dozen to more than 400 employees today. In 2022, Orca was the recipient of Amazon Web Services Global Security Partner of the Year Award. Wiz was founded in January 2020, a year after Orca, by Assaf Rappaport, Ami Luttwak, Yinon Costica, and Roy Reznik, a team that previously led the Cloud Security Group at Microsoft.

Orca is clearly pissed at Wiz (pardon the pun). Also Orca seems much more openly defensive than others who suspect foul play.

Going beyond surface-level scrutiny, further analysis unveils that Wiz’s founders served together as intelligence operatives in Israel before being employed by Microsoft security. This introduces three significant dimensions to the unfolding narrative.

Firstly, it is worth acknowledging that Wiz founders have a competitive mindset honed in the crucible of combat intelligence activities. Their background as government-trained special operatives from a nation marked by profound existential concerns lends a distinctive perspective that may not readily align with conventional notions of legal compliance, fairness, or the civility inherent to civilian products.

In this sense anyone maintaining some air of formality with humility could serve as an important countermeasure against extrajudicial transgressions, as advocated by seasoned espionage veterans. Regrettably, Wiz’s public image constantly appears to reflect the diametrically opposed disposition of ostentatious self-promotion:

After you leave [Israel’s secret military technology unit of Special Operations — Unit 81] you realize that up until now you did the wildest things in your life together with the most talented people and you want to extend that into civilian life.

Secondly, legal experts have highlighted that recourse against lawyers who exhibit unethical behavior is at least feasible through professional reporting mechanisms. Observant lawyers can step forward when rules of conduct are violated, thereby enabling courts to impose sanctions. While this is by no means perfect, it raises the question: what other avenues are available to security professionals, beyond resorting to protracted legal battles, an undeniably formidable hurdle? FTC, are you reading this?

Lastly, it is imperative to note the intersection of Wiz’s history as staff in Microsoft security, a facet that adds further complexity to the unfolding narrative about Wiz targeting Microsoft security. In a professional context, it is not my assertion that Wiz has informants within Microsoft who provide insider tips on where to poke for soft or bad configurations. No. Rather, I am raising a question regarding the possibility of such an occurrence.

As part of the Wiz Research Team’s ongoing work on accidental exposure of cloud-hosted data, the team scanned the internet for misconfigured storage containers. In this process, we found a GitHub repository under the Microsoft organization named robust-models-transfer. The repository belongs to Microsoft’s AI research division… an attacker could have injected malicious code into all the AI models in this storage account, and every user who trusts Microsoft’s GitHub repository would’ve been infected by it.

However, it’s important to note this storage account wasn’t directly exposed to the public; in fact, it was a private storage account. The Microsoft developers used an Azure mechanism called “SAS tokens”, which allows you to create a shareable link granting access to an Azure Storage account’s data — while upon inspection, the storage account would still seem completely private.

The degree of privacy associated with this storage account is a noteworthy aspect, buried by Wiz far below their headlines. Their report starts out saying “ongoing work on accidental exposure” as if the data were sitting out in public, and then quietly pivots to the fact that the accident involved an extremely private account. It prompts one to contemplate whether an initial investigator of a secret accident may, in fact, be on the scene for a special and unique private reason.
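To make the mechanism concrete, here is a minimal sketch (all names and keys below are hypothetical placeholders, not anything from the Microsoft incident) of how the Azure Python SDK can mint a broad, long-lived SAS URL for a storage account that otherwise rejects anonymous access. Anyone who later finds that URL, say in a public repository, inherits whatever access it encodes, even though the account itself still “seems completely private.”

```python
# Hypothetical sketch: how a "private" Azure Storage account becomes shareable
# through a SAS URL. Names and keys are placeholders, not real values.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import generate_account_sas, ResourceTypes, AccountSasPermissions

account_name = "examplestorage"   # hypothetical account; anonymous access disabled
account_key = "<account-key>"     # never hard-code a real key

# An over-broad, long-lived token of the kind described above: wide permissions
# and an expiry so far out it is effectively permanent.
sas_token = generate_account_sas(
    account_name=account_name,
    account_key=account_key,
    resource_types=ResourceTypes(service=True, container=True, object=True),
    permission=AccountSasPermissions(read=True, write=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=365 * 10),
)

# Publishing this URL (for example in a README) is the whole leak: the account
# configuration never changes, yet the link grants a decade of access.
print(f"https://{account_name}.blob.core.windows.net/?{sas_token}")
```

The account’s own access settings never change, which is why it can look completely private on inspection while a forgotten link quietly grants years of access.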

Let me also be clear that Wiz research posted a fair critique of Microsoft’s overarching security model. The report is actually on target even though “outside the box” (מבצע מחוץ לקופסה, “Operation Outside the Box”), like an air force jet that somehow miraculously flies straight through extensive defenses and hits an exact target inside the box.

As the old joke goes, Iran tries to rattle Israel by revealing 2 million soldiers ready to fight. Israel backs them down saying “we already knew, and we’re not ready to feed 2 million prisoners”.

The Wiz recommendation to establish dedicated storage accounts for external sharing, limiting the potential impact of accidentally over-privileged hidden long-term tokens to external data, is very astute and prudent, especially within the realm of trustworthy AI.

…we recommend creating dedicated storage accounts for external sharing, to ensure that the potential impact of an over-privileged token is limited to external data only.

Amen to that! This is something Microsoft definitely needs to hear, and their customers must demand more. A lack of segmentation coupled with a lack of granular token revocation on an opaque centralized system is a dangerous mix. There are much better ways.
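As one illustration of a better way, here is a sketch, with hypothetical account and container names, of the pattern Wiz recommends: a dedicated storage area used only for external sharing, plus a stored access policy so any token issued against it is read-only, short-lived, and revocable.

```python
# Hypothetical sketch of a safer sharing pattern: dedicated container,
# stored access policy, and a SAS token that only references that policy.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import (
    BlobServiceClient, AccessPolicy, ContainerSasPermissions, generate_container_sas,
)

account_name = "externalshareonly"   # dedicated account: blast radius is external data only
account_key = "<account-key>"

service = BlobServiceClient(
    f"https://{account_name}.blob.core.windows.net", credential=account_key
)
container = service.get_container_client("shared-models")

# Stored access policy: read/list only, one-week lifetime. Deleting or
# tightening this policy later invalidates every SAS issued against it.
policy = AccessPolicy(
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=7),
)
container.set_container_access_policy(signed_identifiers={"external-readonly": policy})

# The token carries only a reference to the policy, not its own open-ended grants.
sas_token = generate_container_sas(
    account_name=account_name,
    container_name="shared-models",
    account_key=account_key,
    policy_id="external-readonly",
)
print(f"https://{account_name}.blob.core.windows.net/shared-models?{sas_token}")
```

Revocation through the policy is what addresses the granular-revocation gap: a standalone token lives until its embedded expiry date, while a policy-backed one dies the moment the policy does.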

Wiz correctly brings focus to data storage methods for AI, which should be personally dedicated instead of easily over-shared. A proper multi-user distributed model for AI data controls greatly reduces the looming danger of big privacy and integrity breaches. I agree with their analysis here.

And then, unfortunately…

While I concur with many technical details of the Wiz blog, I also discern a certain amount of unwarranted self-serving sensationalism. It is not my intention here to downplay intellectual acumen of the Wiz team. They are ex-intelligence, after all, trained by some of the best in the world to be the best at exploits and communication.

Their discussion could have stayed sharp and relevant to emerging data platform safety, touching on AI as a direct beneficiary of more distributed safety models. I’m definitely not stepping in the way if that was where they stayed, especially the “lack of monitoring capabilities” on centralized proprietary data stores raising a problem of secret “privileged tokens as a backdoor”.

Yet instead I observed their extensive blog post wandering into amateurishly immodest places, pressing readers hard (especially Microsoft customers) to jump into Wiz’s sales funnel.

Clumsy. Lazy. Arrogant.

A comparison of Wiz’s blog tone and content with Microsoft’s official remarks on the exact same topic highlights some very uncomfortable disparities.

No customer data was exposed, and no other internal services were put at risk because of this issue. No customer action is required in response to this issue. […] Our investigation concluded that there was no risk to customers as a result of this exposure.

Microsoft admits a bad token event and then unequivocally asserts no customer data was exposed, no other internal services jeopardized. They further state that no customer action is required. Their conclusion is no risk to customers from the configuration error.

These key statements (pardon the pun) are conspicuously absent from Wiz’s hyperventilated extra analysis, which seems designed to draw attention for revenue.

Microsoft’s narrative keeps risk in perspective as it discusses why configuration errors occur and emphasizes this issue was promptly addressed upon discovery. They suggest the issue be used as an educational opportunity, sharing internal/private discussions to facilitate learning. Again maybe it is pertinent to note it was ex-employees of Microsoft security who established Wiz to directly compete with Microsoft security.

GitHub’s secret scanning service monitors all public open-source code changes for plaintext exposure of credentials and other secrets. This service runs a SAS detection, provided by Microsoft, that flags Azure Storage SAS URLs pointing to sensitive content, such as VHDs and private cryptographic keys. Microsoft has expanded this detection to include any SAS token that may have overly-permissive expirations or privileges.

The slight extension of Microsoft’s scanning is thus the actual critical development in this context. And I don’t see that detail mentioned at all by Wiz, as they try to sell their own scanner.
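For readers curious what such scanning looks like in principle, here is a toy sketch of my own (not GitHub’s or Microsoft’s actual detection logic) that flags Azure Storage SAS URLs in a blob of text when their query parameters show write-class permissions or a far-off expiry.

```python
# Toy illustration only: flag Azure Storage SAS URLs that look over-permissive.
# Not the real GitHub/Microsoft secret-scanning implementation.
import re
from datetime import datetime, timedelta, timezone
from urllib.parse import urlparse, parse_qs

SAS_URL = re.compile(r"https://[\w.-]+\.blob\.core\.windows\.net/\S*sig=\S+", re.I)

def flag_risky_sas(text: str, max_days: int = 90):
    findings = []
    for url in SAS_URL.findall(text):
        params = parse_qs(urlparse(url).query)
        perms = params.get("sp", [""])[0]    # e.g. "racwdl"
        expiry = params.get("se", [""])[0]   # e.g. "2051-10-09T00:00:00Z"
        reasons = []
        if any(p in perms for p in "wdc"):   # write / delete / create
            reasons.append(f"broad permissions: {perms}")
        if expiry:
            try:
                exp = datetime.fromisoformat(expiry.replace("Z", "+00:00"))
                if exp - datetime.now(timezone.utc) > timedelta(days=max_days):
                    reasons.append(f"expiry too far out: {expiry}")
            except ValueError:
                reasons.append(f"unparseable expiry: {expiry}")
        if reasons:
            findings.append((url[:60] + "...", reasons))
    return findings

# Example: scan a README before it gets pushed to a public repository.
# print(flag_risky_sas(open("README.md").read()))
```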

Wiz instead appears to emphasize most how they alone came into possession of Microsoft’s private and sensitive personal data, including passwords, secret keys, and internal Microsoft Teams messages, insinuating a very deep level of safety breach.

SO SCARY. Buy! The Wiz blog literally says “we recommend” and then… their product.

According to sources, Wiz purportedly received its moniker from former Israeli military intelligence personnel who regarded themselves as omnipotent “Wizards.” This appellation draws inspiration from the popular story “The Wizard of Oz,” which underscores the vulnerability of Americans, who can be easily swayed by fear into funneling their finances and backing into deceptive technologies and detrimental sources of authority. It highlights a mastery of crafty maneuvers and superficial victories over genuine substance.

To clarify, I am not alleging the existence of an informant, as would be required to make the Wiz scanning service somehow more intelligent, effective and targeted than Microsoft’s own competing services. I am merely raising questions about the actions of former Microsoft security employees who have entered into direct competition with their previous employer and who seem repeatedly entangled with Microsoft security from within while standing outside it.

The allegations of the Orca lawsuit as well as murmurs within the security industry about Wiz’s unsavory business practices do bring to mind potential further investigations of how Wiz really “researches” flaws.

At the end of the day a CISO comparing the Microsoft and Wiz versions of the same story should be wondering whether they should risk engaging with offensively oriented braggadocios, given there also are many more compelling professionals (quiet, modest and effective) who are available and trustworthy.


On Robots Killing People

The robot revolution began long ago, and so did the killing. One day in 1979, a robot at a Ford Motor Company casting plant malfunctioned—human workers determined that it was not going fast enough. And so twenty-five-year-old Robert Williams was asked to climb into a storage rack to help move things along. The one-ton robot continued to work silently, smashing into Williams’s head and instantly killing him. This was reportedly the first incident in which a robot killed a human; many more would follow.

At Kawasaki Heavy Industries in 1981, Kenji Urada died in similar circumstances. A malfunctioning robot he went to inspect killed him when he obstructed its path, according to Gabriel Hallevy in his 2013 book, When Robots Kill: Artificial Intelligence Under Criminal Law. As Hallevy puts it, the robot simply determined that “the most efficient way to eliminate the threat was to push the worker into an adjacent machine.” From 1992 to 2017, workplace robots were responsible for 41 recorded deaths in the United States—and that’s likely an underestimate, especially when you consider knock-on effects from automation, such as job loss. A robotic anti-aircraft cannon killed nine South African soldiers in 2007 when a possible software failure led the machine to swing itself wildly and fire dozens of lethal rounds in less than a second. In a 2018 trial, a medical robot was implicated in killing Stephen Pettitt during a routine operation that had occurred a few years earlier.

You get the picture. Robots—”intelligent” and not—have been killing people for decades. And the development of more advanced artificial intelligence has only increased the potential for machines to cause harm. Self-driving cars are already on American streets, and robotic "dogs" are being used by law enforcement. Computerized systems are being given the capabilities to use tools, allowing them to directly affect the physical world. Why worry about the theoretical emergence of an all-powerful, superintelligent program when more immediate problems are at our doorstep? Regulation must push companies toward safe innovation and innovation in safety. We are not there yet.

Historically, major disasters have needed to occur to spur regulation—the types of disasters we would ideally foresee and avoid in today’s AI paradigm. The 1905 Grover Shoe Factory disaster led to regulations governing the safe operation of steam boilers. At the time, companies claimed that large steam-automation machines were too complex to rush safety regulations. This, of course, led to overlooked safety flaws and escalating disasters. It wasn’t until the American Society of Mechanical Engineers demanded risk analysis and transparency that dangers from these huge tanks of boiling water, once considered mystifying, were made easily understandable. The 1911 Triangle Shirtwaist Factory fire led to regulations on sprinkler systems and emergency exits. And the preventable 1912 sinking of the Titanic resulted in new regulations on lifeboats, safety audits, and on-ship radios.

Perhaps the best analogy is the evolution of the Federal Aviation Administration. Fatalities in the first decades of aviation forced regulation, which required new developments in both law and technology. Starting with the Air Commerce Act of 1926, Congress recognized that the integration of aerospace tech into people’s lives and our economy demanded the highest scrutiny. Today, every airline crash is closely examined, motivating new technologies and procedures.

Any regulation of industrial robots stems from existing industrial regulation, which has been evolving for many decades. The Occupational Safety and Health Act of 1970 established safety standards for machinery, and the Robotic Industries Association, now merged into the Association for Advancing Automation, has been instrumental in developing and updating specific robot-safety standards since its founding in 1974. Those standards, with obscure names such as R15.06 and ISO 10218, emphasize inherent safe design, protective measures, and rigorous risk assessments for industrial robots.

But as technology continues to change, the government needs to more clearly regulate how and when robots can be used in society. Laws need to clarify who is responsible, and what the legal consequences are, when a robot’s actions result in harm. Yes, accidents happen. But the lessons of aviation and workplace safety demonstrate that accidents are preventable when they are openly discussed and subjected to proper expert scrutiny.

AI and robotics companies don’t want this to happen. OpenAI, for example, has reportedly fought to “water down” safety regulations and reduce AI-quality requirements. According to an article in Time, it lobbied European Union officials against classifying models like ChatGPT as “high risk” which would have brought “stringent legal requirements including transparency, traceability, and human oversight.” The reasoning was supposedly that OpenAI did not intend to put its products to high-risk use—a logical twist akin to the Titanic owners lobbying that the ship should not be inspected for lifeboats on the principle that it was a “general purpose” vessel that also could sail in warm waters where there were no icebergs and people could float for days. (OpenAI did not comment when asked about its stance on regulation; previously, it has said that “achieving our mission requires that we work to mitigate both current and longer-term risks,” and that it is working toward that goal by “collaborating with policymakers, researchers and users.”)

Large corporations have a tendency to develop computer technologies to self-servingly shift the burdens of their own shortcomings onto society at large, or to claim that safety regulations protecting society impose an unjust cost on corporations themselves, or that security baselines stifle innovation. We’ve heard it all before, and we should be extremely skeptical of such claims. Today’s AI-related robot deaths are no different from the robot accidents of the past. Those industrial robots malfunctioned, and human operators trying to assist were killed in unexpected ways. Since the first-known death resulting from the feature in January 2016, Tesla’s Autopilot has been implicated in more than 40 deaths according to official report estimates. Malfunctioning Teslas on Autopilot have deviated from their advertised capabilities by misreading road markings, suddenly veering into other cars or trees, crashing into well-marked service vehicles, or ignoring red lights, stop signs, and crosswalks. We’re concerned that AI-controlled robots already are moving beyond accidental killing in the name of efficiency and “deciding” to kill someone in order to achieve opaque and remotely controlled objectives.

As we move into a future where robots are becoming integral to our lives, we can’t forget that safety is a crucial part of innovation. True technological progress comes from applying comprehensive safety standards across technologies, even in the realm of the most futuristic and captivating robotic visions. By learning lessons from past fatalities, we can enhance safety protocols, rectify design flaws, and prevent further unnecessary loss of life.

For example, the UK government already sets out statements that safety matters. Lawmakers must reach further back in history to become more future-focused on what we must demand right now: modeling threats, calculating potential scenarios, enabling technical blueprints, and ensuring responsible engineering for building within parameters that protect society at large. Decades of experience have given us the empirical evidence to guide our actions toward a safer future with robots. Now we need the political will to regulate.

This essay was written with Bruce Schneier, and previously appeared on Atlantic.com.