Texas Recruits Armed Vigilantes With Immunity From Prosecution

Anyone in the world looking to cause some real harm to society, look no further than Texas.

The state is proposing a return to immunity from prosecution for anyone who signs up to commit crimes against humanity.

In 2021 a National Book Award went to a poet who described this Texas concept as…

…vigilantes hooded like blind angels, hunting with torches for men the color of night…

More to the point, this poet was reflecting on how Texas implemented this under Woodrow Wilson’s nativist, xenophobic, genocidal platform called “America First”.

Dousing groups of Mexicans with kerosene and then burning them was also a topic of discussion for Americans on March 10, 1916, after the Battle of Columbus. Over 60 dead men were piled together and their bodies incinerated. Keep in mind that all of this came in the context of Americans a year earlier calling for the “extermination” of non-whites, which led to the killing of thousands of Americans of Mexican descent.

Let’s be honest.

Texas pioneered the kind of unaccountable racist vigilantism that Nazi Germany studied and applied in Europe.

Unlike Germany, however, America has never been held accountable and Texas is front and center in that issue. Hitler named his personal train “Amerika” (to honor genocide), and we can only wonder why he didn’t specify Texas.

Imagine someone in Germany proposing a bill to bring Nazi practices back. Impossible. In Texas, however, it’s hard to imagine someone NOT proposing a return to its worst chapters in history.

China Mocks NRA. Ships M16 to Russia Marked as “Civilian Hunting Rifles”

Someone in supply chain law clearly believed that mislabeling a military assault rifle gave them a loophole big enough to drive trucks through.

China North Industries Group Corporation Limited, one of the country’s largest state-owned defense contractors, sent the rifles in June 2022 to a Russian company called Tekhkrim that also does business with the Russian state and military. The CQ-A rifles, modeled off of the M16 but tagged as “civilian hunting rifles” in the data, have been reported to be in use by paramilitary police in China and by armed forces from the Philippines to South Sudan and Paraguay.

Look at all those places listed where civilians are being hunted… just like in America.

As one hunter put it in the comments section of an article on americanhunter.org, “I served in the military and the M16A2/M4 was the weapon I used for 20 years. It is first and foremost designed as an assault weapon platform, no matter what the spin. A hunter does not need a semi-automatic rifle to hunt, if he does he sucks, and should go play video games. I see more men running around the bush all cammo’d up with assault vests and face paint with tricked out AR’s. These are not hunters but wannabe weekend warriors.”

China appreciates the NRA, obviously, a little too much.

Massive Tesla Privacy Breach Exposes Culture of Cruelty and Customer Abuse

Privacy in a Tesla vehicle is non-existent, apparently. Interesting to think Tesla customers actually paid for this treatment.

…between 2019 and 2022, groups of Tesla employees privately shared via an internal messaging system sometimes highly invasive videos and images recorded by customers’ car cameras, according to interviews by Reuters with nine former employees.

Some of the recordings caught Tesla customers in embarrassing situations. One ex-employee described a video of a man approaching a vehicle completely naked.

Also shared: crashes and road-rage incidents. One crash video in 2021 showed a Tesla driving at high speed in a residential area hitting a child riding a bike, according to another ex-employee. The child flew in one direction, the bike in another. The video spread around a Tesla office in San Mateo, California, via private one-on-one chats, “like wildfire,” the ex-employee said.

Video recordings were being made and then viewed by Tesla staff even when a car was parked, and even when it was turned off. In other words, the cameras are billed as safety devices, yet they were potentially on all the time, without Tesla owners being aware.

Tesla states in its online “Customer Privacy Notice” that its “camera recordings remain anonymous and are not linked to you or your vehicle.” But seven former employees told Reuters the computer program they used at work could show the location of recordings – which potentially could reveal where a Tesla owner lived.

One ex-employee also said that some recordings appeared to have been made when cars were parked and turned off. Several years ago, Tesla would receive video recordings from its vehicles even when they were off, if owners gave consent. It has since stopped doing so.

It has stopped? Prove that is true.

[Investigators have not been] able to determine if the practice of sharing recordings, which occurred within some parts of Tesla as recently as last year, continues today or how widespread it was.

There is no reason for Tesla staff to be pulling up videos of the inside of people’s garages from parked cars that have been turned off, especially given the personally identifiable data involved and the fact that such frames have absolutely nothing to do with safety. It looks like a culture of abuse and little more.

If this were a hospital, for example, we’d be talking about doctors and nurses who engage in grossly negligent safety practices, violating patient privacy at large scale.

“We could see inside people’s garages and their private properties,” said another former employee. “Let’s say that a Tesla customer had something in their garage that was distinctive, you know, people would post those kinds of things.” […] About three years ago, some employees stumbled upon and shared a video of a unique [object] inside a garage, according to two people who viewed it.

Stumbled? Like the employees were drunk?

Two ex-employees said they weren’t bothered by the sharing of images, saying that customers had given their consent or that people long ago had given up any reasonable expectation of keeping personal data private. Three others, however, said they were troubled by it.

“It was a breach of privacy, to be honest. And I always joked that I would never buy a Tesla after seeing how they treated some of these people,” said one former employee.

Another said: “I’m bothered by it because the people who buy the car, I don’t think they know that their privacy is, like, not respected … We could see them doing laundry and really intimate things. We could see their kids.”

Drunk with cruelty from abuse of power. In related news, Gartner is strongly advising companies to “weaponize” privacy — encouraging competitors to shoot Tesla dead.

“Weaponise privacy as a prospect conversation tool and a competitive advantage,” said Neubauer. “By making privacy a key part of your customer value proposition, privacy has become a conviction-based motivator for buyers. Just as people reach for organic or cruelty-free products, consumers are willing to go out of their way, and in some instances, pay a premium for a product they believe will care best for their data.”

Cruelty-free products? Pretty sure Gartner just defined Tesla as a cruel and worthless product that doesn’t care for privacy.

Data Integrity Breaches Are Killing Trust in AI

Here’s the money quote from Roger McNamee:

So long as we build AIs on lousy content, the results are going to be lousy. AI will be right some of the time, but you won’t be able to tell if the answer is right or wrong without doing further research, which defeats the purpose.

I generally disagree with the GIGO (garbage in, garbage out) meme, but here I love that McNamee calls out the lack of value. You ask the computer for the meaning of life and it spits out 42? Who can tell if that’s right unless they do the math themselves?

Actually, it gets even better.

Engineers have the option of training AIs on content created by experts, but few choose that path, due to cost.

Cost? Cost of quality data?

That’s a symptom of the last decade. Many rushed into an unregulated “data lake” mentality to amass quantity (variety and volume at velocity), with a total disregard for quality.

The “get as many dots as possible so you can someday connect them” mindset (a sort of rabid data consumption and hoarding) has gradually given way to “collect only the things you can use.”
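A minimal sketch of what “collect only the things you can use” looks like in practice: a quality gate at ingestion instead of wholesale dumping into a data lake. The field names, sources, and rules below are hypothetical examples, not anyone’s actual pipeline.

```python
# Illustrative ingestion quality gate: records must pass validation before
# being collected, rather than being dumped wholesale into a "data lake".
# Field names, accepted sources, and rules here are hypothetical.

def is_usable(record: dict) -> bool:
    """Accept only records with known provenance, a timestamp, and real content."""
    return (
        record.get("source") in {"expert_review", "verified_sensor"}  # provenance
        and record.get("timestamp") is not None                       # auditability
        and bool(record.get("content", "").strip())                   # non-empty content
    )

raw_feed = [
    {"source": "expert_review", "timestamp": "2023-04-01", "content": "curated text"},
    {"source": "web_scrape", "timestamp": None, "content": "???"},
    {"source": "verified_sensor", "timestamp": "2023-04-02", "content": ""},
]

curated = [r for r in raw_feed if is_usable(r)]
print(len(curated))  # only the first record passes the gate
```

The point of the design is that quality checks happen before storage, so downstream consumers (including model training) never see the garbage in the first place.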

While McNamee claims to be writing about democracy, what he’s really saying is that the market is ripe for a data innovation revolution that reduces integrity breaches.

Technology desperately needs to be brought into such “save our democracy” discussions, grounded in practical solutions.

A simple example is the W3C Solid protocol. It offers real, present steps toward doing the right thing, and it would put AI companies far ahead of the safety baseline now looming from smart regulators like Italy’s.
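The core idea behind Solid-style personal data stores can be modeled in a few lines. This is an illustrative toy, not the Solid API: the class, paths, and agent names are made up. What it shows is the inversion of control — the owner holds the data and the access rules, and a consumer (say, an AI trainer) can read only while consent is granted.

```python
# Toy model (NOT the Solid API) of a personal data store ("pod"):
# the owner keeps both the data and the access-control list, so consent
# can be granted or revoked at any time. All names are hypothetical.

class PersonalDataStore:
    def __init__(self, owner: str):
        self.owner = owner
        self._resources = {}  # path -> data
        self._acl = {}        # path -> set of agents with read access

    def put(self, path: str, data: str) -> None:
        """Owner stores a resource in their own pod."""
        self._resources[path] = data
        self._acl.setdefault(path, set())

    def grant(self, path: str, agent: str) -> None:
        self._acl[path].add(agent)

    def revoke(self, path: str, agent: str) -> None:
        self._acl[path].discard(agent)

    def read(self, path: str, agent: str) -> str:
        """A consumer reads only while the owner's consent stands."""
        if agent not in self._acl.get(path, set()):
            raise PermissionError(f"{agent} has no access to {path}")
        return self._resources[path]

pod = PersonalDataStore(owner="alice")
pod.put("/training/notes", "owner-curated, high-quality text")
pod.grant("/training/notes", "ai-trainer")
print(pod.read("/training/notes", "ai-trainer"))  # works while consent is granted
pod.revoke("/training/notes", "ai-trainer")
# pod.read("/training/notes", "ai-trainer") would now raise PermissionError
```

In the real protocol the pod is a web server and access control is declarative, but the ownership model is the same: the AI company queries data the owner curates, instead of hoarding a copy the owner can never inspect or revoke.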

Taking regulatory action against one of the worst abusers of users, OpenAI, is definitely the right move here.

Last week, the Italian Data Protection Watchdog ordered OpenAI to temporarily cease processing Italian users’ data amid a probe into a suspected breach of Europe’s strict privacy regulations. The regulator, which is also known as Garante, cited a data breach at OpenAI which allowed users to view the titles of conversations other users were having with the chatbot. There “appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies,” Garante said in a statement Friday. Garante also flagged worries over a lack of age restrictions on ChatGPT, and how the chatbot can serve factually incorrect information in its responses. OpenAI, which is backed by Microsoft, risks facing a fine of 20 million euros ($21.8 million), or 4% of its global annual revenue, if it doesn’t come up with remedies to the situation in 20 days.

It’s the right move because the breach reported by users of OpenAI is far worse than the company is admitting, mainly because integrity failures are not regulated well enough to force disclosure (such rules fall far behind confidentiality/privacy laws).

20 days? That should be more than enough time for a company that rapidly dumps unsafe engineering into the public domain. I’m sure they’ll have a fix pushed to production in 20 hours. And then another one. And then another one…

But seriously, the systemic and lasting remedies they need (such as personal data stores so owners can curate quality) have been sitting right in front of them. Maybe the public loss of trust from integrity breaches, coupled with regulatory action, will force the necessary AI innovation.