Update: “Motor Mouth Arabia” has copied this post, republishing it as their own with AI-generated copy edits. Also copied here, whatever ZBR is supposed to mean.
Ontario, Canada: September 22. This seems like yet another case of Tesla software failing to see children and stop signs.
Halton police pulled over a Tesla that allegedly “passed a school bus that was stopped, had its lights activated and children were loading onto the bus.”
A 30-year-old woman ignored a stopped school bus loading children, because she bought a Tesla. Of course the first thing that comes to mind is this recent news:
The National Highway Traffic Safety Administration (NHTSA) on Friday said it will probe the March 15 crash in North Carolina that injured a 17-year-old charter school student. The State Highway Patrol said the driver of the 2022 Tesla Model Y, a 51-year-old man, didn’t stop for the bus, which was displaying all of its activated warning signs.
March? Let’s also talk about the 16th of May, when a Tesla plowed into a school bus in New York.
Tesla ignores school buses and puts children at risk of death? Yes, it’s a long-known problem that shows no signs of being addressed.
We’ve done tests over past years. For a school bus with kids getting off, we showed that the Tesla would drive right past, completely ignoring the “school zone” sign, and keep on driving at 40 miles per hour [25 miles per hour over the posted limit, a measly $500 maximum fine].
The latest research says driverless cars are ten times worse than human drivers (Waymo/Cruise crashing every 60,000 miles, whereas human drivers average 600,000 miles).
The latest NHTSA data paints an even worse picture for Tesla, revealing a jump to 30 fatalities (12 more since January 2023). The average now seems to be one death reported by Tesla in every ten of their “Autopilot” crashes.
First, let’s just get out of the way that white South African children exposed to horribly racist Apartheid lies were raised to believe Black people should never be allowed to keep private thoughts.
Inside South Africa, riots, boycotts, and protests by black South Africans against white rule had occurred since the inception of independent white rule in 1910. Opposition intensified when the Nationalist Party, assuming power in 1948, effectively blocked all legal and non-violent means of political protest by non-whites. The African National Congress (ANC) and its offshoot, the Pan Africanist Congress (PAC), both of which envisioned a vastly different form of government based on majority rule, were outlawed in 1960 and many of its leaders imprisoned. The most famous prisoner was a leader of the ANC, Nelson Mandela, who had become a symbol of the anti-Apartheid struggle.
Indeed, it was Nelson Mandela’s private thoughts while in prison (as well as his sophisticated use of secret distributed encryption technology) that have been credited with winning the war against Apartheid.
Gaining access to someone’s thoughts used to be called detention and torture, or, as many Americans know from their own history of denying Blacks privacy since the 1770s, cynically referred to as rubber hose cryptanalysis.
With the rise of better privacy technology, the brain remains a crucial aspect of safety. Things you know, such as a password, are thoughts that are supposedly beyond the reach of anyone or anything if you refuse to disclose them.
Remember, Nelson Mandela took on and defeated the entire South African government by keeping his thoughts secret yet shared extremely selectively.
[Musk’s grandfather was] leader in a fringe political movement that called itself Technocracy Incorporated, which advocated an end to democracy and rule by a small tech-savvy elite. During World War II, the Canadian government banned the group, declaring it a risk to national security. Haldeman’s involvement with Technocracy continued, though, and he was arrested and convicted of three charges relating to it. Once he got to South Africa, he added Black Africans to his list of rhetorical targets.
A grandfather who was an avowed white nationalist, aligned with Hitler, gives important context to Elon Musk’s childhood. The old man pushing to spread Apartheid with “technocracy” had a grandson who is now allegedly torturing captive animals to death in an ill-minded attempt to use technology to strip away the physical privacy of thoughts.
Public records reviewed by WIRED, and interviews conducted with a former Neuralink employee and a current researcher at the University of California, Davis primate center, paint a wholly different picture of Neuralink’s animal research. The documents include veterinary records, first made public last year, that contain gruesome portrayals of suffering reportedly endured by as many as a dozen of Neuralink’s primate subjects, all of whom needed to be euthanized.
Reading this stuff is truly awful, reminiscent of Nazi experiments, demonstrating again the inhumane and cruel lies that Elon Musk often runs with.
Additional veterinary reports show the condition of a female monkey called “Animal 15” during the months leading up to her death in March 2019. Days after her implant surgery, she began to press her head against the floor for no apparent reason; a symptom of pain or infection, the records say. Staff observed that though she was uncomfortable, picking and pulling at her implant until it bled, she would often lie at the foot of her cage and spend time holding hands with her roommate.
Animal 15 began to lose coordination, and staff observed that she would shake uncontrollably when she saw lab workers. Her condition deteriorated for months until the staff finally euthanized her. A necropsy report indicates that she had bleeding in her brain and that the Neuralink implants left parts of her cerebral cortex “focally tattered.”
Shown a copy of Musk’s remarks about Neuralink’s animal subjects being “close to death already,” a former Neuralink employee alleges to WIRED that the claim is “ridiculous,” if not a “straight fabrication.” “We had these monkeys for a year or so before any surgery was performed,” they say. The ex-employee, who requested anonymity for fear of retaliation, says that up to a year’s worth of behavioral training was necessary for the program, a time frame that would exempt subjects already close to death.
Think of it this way. South Africa’s secret police didn’t bother to detain and torture any Black people “close to death already” and neither would Elon Musk or the fools who decide to work for his evil intentions.
He plays with fire, everyone else gets burned.
The labs sought healthy subjects for a specific reason that makes perfect sense: to keep captive subjects alive while forcing them to disclose all their secrets. This is nothing new to veterans of the CIA, who know that capturing someone terminally ill to extract information isn’t worth the bother.
Musk propaganda is literally the exact inversion of truth, as if to prove the South African methods of keeping Apartheid alive aren’t dead yet. Rockets blow up on launch as intended. Cars keep killing more and more people. Captive patients die as expected. Biko shot himself with a rifle in the back of the head while his hands were tied behind his back. Whatever the absolute worst outcome is, it gets described in a way that, no matter how far from truth, lets all liability go up in a puff of white smoke.
He selected healthy subjects in order to test dangerous implants and measure the effects on their health. He obviously needs to prove his toys meant to destroy privacy aren’t a source of harm, or people will reject them for causing terminal illness (which they in fact are doing).
Common sense test: if you only choose terminally ill patients for a test of new technology, how would anyone ever suitably prove that technology wasn’t the cause of their immense suffering and early death?
Instead of the right thing, doing the hard work of proving no harm, he has resorted to the usual gaslighting claim that any and all harm should be expected, even when it is very obviously and totally unexpected.
To assess whether the technology is causing harm or not, researchers typically follow established protocols, which is the antithesis to Elon Musk’s constant demands that nobody ever follow established protocols (because they would quickly expose his fraud).
In all, the company has killed about 1,500 animals, including more than 280 sheep, pigs and monkeys, following experiments since 2018, according to records reviewed by Reuters and sources with direct knowledge of the company’s animal-testing operations. The sources characterized that figure as a rough estimate because the company does not keep precise records on the number of animals tested and killed.
They say their company doesn’t keep records on the number killed. If you don’t keep records, you are peddling ignorance and not science.
Musk told employees he wanted the monkeys at his San Francisco Bay Area operation to live in a “monkey Taj Mahal.”
Presumably Musk thinks being a soulless monster is amusing when he says he wants his test subjects to live in a mausoleum. Does it make any more sense if he says he wants his monkeys to live in a casket six feet underground? Maybe he doesn’t know what the Taj Mahal is. Either way…
What the sources really mean, since there is literally no way to perform research in an unexplored field like this without keeping detailed records, is that they keep everything secret to avoid accountability (just like during Apartheid).
At one meeting, he suggested using data collected from the car’s cameras—one of which is inside the car and focused on the driver…. One of the women at the table pushed back. “We went back and forth with the privacy team about that,” she said. … Musk was not happy. … “I am the decision-maker at this company, not the privacy team,” he said. “I don’t even know who they are.”
“The decider” brags that nobody else matters and that he doesn’t know or care who the experts are anyway. He’ll make the dumbest decision possible and classify it as genius. You’re the enemy if you disagree. Of course he doesn’t care what truth is; he’s making it up like a tin-pot wannabe dictator in Africa (this apple didn’t fall far from its horribly racist family tree).
In other related news: police refused to charge Elon Musk with a crime even though he posted video of himself in his Tesla clearly breaking the law. Historians may recognize this as similar to when the South African Apartheid state set up parallel and unequal information access and record-keeping regimes to create secrecy and lack of accountability only for… white supremacists.
According to reports, Anil Kapoor, a highly renowned figure, is seeking protection against the unauthorized use of his name, image, and voice for commercial purposes. He wishes to prevent his public image from being depicted in a manner that he considers negative (and denies him control, including profit rights).
…Anil Kapoor is one of the most celebrated and acclaimed successful actors in the industry who has appeared in over 100 films, television shows, and web series. He said Kapoor has also endorsed a large variety of products and services and has appeared in several advertisements as well.
Sounds pretty famous.
Nothing says you have achieved fame like selling out endorsing a large variety of products and services. If you presented to me an AI-generated Anil Kapoor image on a shampoo bottle with some kind of changes (e.g. skin darkened) alongside a non-AI version, I’d be hard-pressed to distinguish authenticity between the two.
But my own ignorance about who this man wants to be seen as is irrelevant to assigning rights for his data, just like when someone says they can’t tell the difference between Elvis and the dozens of Black musicians he stole from. It actually matters that those Black musicians lost their audiences and their income when some young white boy used the latest technology to steal others’ data and give them no control or credit.
Kapoor’s court case centers on the fact that he should decide how he appears, control how others are allowed to portray him, and keep more money for himself if his persona is used for profit. That doesn’t have much to do with celebrity in my mind, except that it’s easy to say the person is the product. Really it’s a more universal issue: everyone should control their own data, regardless of whether they can sell out endorse a large variety of products and services. But I suppose I have to admit his product endorsements mean he has the kind of perceptible loss in revenue that positions him to stand up on behalf of us all.
He apparently even says a Marathi slang word for “excellent” in such a way that it needs protection from AI, because how he says it always links back to him.
…Kapoor’s counsel Anand submitted that the expression “jhakaas”, a Marathi slang, was popularised by the actor in Hindi films and as per press reports how he expresses the word is exclusively used by him. Anand claimed that Kapoor popularised this term in the 1980s with his unique style and delivery in various films and public appearances. “What’s interesting is this is not jhakaas alone, it’s the way he says it with a twisted lip,” Anand added to which Justice Singh said this is what the HC has to protect and not the word itself.
A twisted lip is interesting? Isn’t the definition of a signature move that its uniqueness means every use is a provable reference, therefore easily protected? I wouldn’t call that interesting, given other similar examples.
Spiky peroxide-blonde-haired Billy Idol repeats the word “masturbatory” three times with a sneer
Sneering while saying the word masturbatory. Clearly nobody but Billy Idol should be allowed to do this.
But seriously, I get the concept of this case is “informational self-determination” (sounds far better in German: informationelle Selbstbestimmung) and so I’m following eagerly along because this is about the future of the Web.
Yet also something doesn’t quite seem right in India.
What actually becomes interesting is when a High Court negatively portrays people while arguing that negative portrayals are real harms of huge consequence.
The judge engaged in very obviously racist language in an attempt to explain rights of a celebrity and the damage to his reputation from unwanted negative portrayals.
Justice Singh said while there can be no doubt that free speech about a well-known person is protected in the form of write-ups, parody, satires, criticism etc, which is genuine, but when the same crosses the line and results in tarnishment, blackening or jeopardising the individual’s personality and elements associated with the individual, it is illegal.
Blackening? Excuse me, Justice Singh?
Kapoor must be glad he isn’t black, as the court says blackening him crosses a line into real harm.
“The technological tools that are now available make it possible for any unauthorised user to make use of celebrities’ persona, by using such tools including Artificial Intelligence. The celebrity also enjoys the right of privacy and does not wish that his or her image, or voice is portrayed in a dark manner as is being done on porn websites,” the court added.
Portrayed in a dark manner? Come on.
Are we seriously supposed to believe being portrayed in a dark manner means crime has been committed? Isn’t dark something good? I hear that India can’t get enough dark chocolate lately, for example, claiming somehow all kinds of innovation and health benefits over the awful bad stuff of light chocolate:
High in Antioxidants
May Lower Blood Pressure
Improves Heart Health
Boosts Brain Function
May Lower Cholesterol
Helps Control Blood Sugar
Reduces Stress
They left out “makes justice system less racist”.
I am not in any way endorsing any products here, definitely not saying you should taste the supreme benefits of Royce India dark chocolate. To start with, I claim absolutely no celebrity…
Anyway, you can see in the court statement above the big AI money quote alongside all the racism, in case you were wondering how this case differed from decades of conflicts over pictures or videos.
Racism bubbles up hot and steamy throughout this court’s narrative about protecting a rich and powerful celebrity from any negative depictions.
The Court can’t turn a blind eye to such misuse of a personality’s name and other elements, and dilution and tarnishment are all actionable torts Kapoor would have to be protected against, Justice Singh said.
Don’t turn a blind eye to tarnishment, they say, a word that means… wait a minute… I have to look this one up in a dictionary just to be sure… to become darker.
WOT. Again?!
Is there any possible way in India for a High Court to say someone is harmed other than referring to dark as harm and inherently bad?
…the aspiration for white skin can be more directly traced to colonialism much in the way that racism originates with slavery and colonialism. It is with the arrival of the British colonialists that we see specific codified color lines. Unlike previous waves of incursions, the British, with their distinct whiteness, specifically emphasized the separation between themselves and the Indians. A large body of historical and socio-cultural literature has documented the British emphasis on whiteness as a form of racial superiority and their justification of colonization…
Actionable torts, indeed.
For me the case really, seriously raises the deeper question of whether racial discrimination is a tort.
Can a court repeatedly emphasize that dark is bad and light is good, making obvious negative depictions of huge swaths of society, while claiming to protect society against unwanted negative depictions?
I mean if someone used generative AI to actually darken Kapoor’s skin as a test case for this court, it seems by their own words he would need to be protected from it. No? Self-own. On that same point, a court’s repeated portrayal of darker things as lesser or worse means it is repeatedly engaging in the very thing it claims is so awful that it must be stopped immediately.