China Mocks NRA, Ships M16 Clones to Russia Marked as “Civilian Hunting Rifles”

Someone handling supply chain compliance clearly believed that mislabeling a military assault rifle gave them a loophole big enough to drive trucks through.

China North Industries Group Corporation Limited, one of the country’s largest state-owned defense contractors, sent the rifles in June 2022 to a Russian company called Tekhkrim that also does business with the Russian state and military. The CQ-A rifles, modeled off of the M16 but tagged as “civilian hunting rifles” in the data, have been reported to be in use by paramilitary police in China and by armed forces from the Philippines to South Sudan and Paraguay.

Look at all those places listed where civilians are being hunted… just like in America.

As one hunter put it in the comments section of an article on americanhunter.org, “I served in the military and the M16A2/M4 was the weapon I used for 20 years. It is first and foremost designed as an assault weapon platform, no matter what the spin. A hunter does not need a semi-automatic rifle to hunt, if he does he sucks, and should go play video games. I see more men running around the bush all cammo’d up with assault vests and face paint with tricked out AR’s. These are not hunters but wannabe weekend warriors.”

China appreciates the NRA, obviously, a little too much.

Massive Tesla Privacy Breach Exposes Culture of Cruelty and Customer Abuse

Privacy in a Tesla vehicle is non-existent, apparently. Interesting to think Tesla customers actually paid for this treatment.

…between 2019 and 2022, groups of Tesla employees privately shared via an internal messaging system sometimes highly invasive videos and images recorded by customers’ car cameras, according to interviews by Reuters with nine former employees.

Some of the recordings caught Tesla customers in embarrassing situations. One ex-employee described a video of a man approaching a vehicle completely naked.

Also shared: crashes and road-rage incidents. One crash video in 2021 showed a Tesla driving at high speed in a residential area hitting a child riding a bike, according to another ex-employee. The child flew in one direction, the bike in another. The video spread around a Tesla office in San Mateo, California, via private one-on-one chats, “like wildfire,” the ex-employee said.

Video recordings were being made and then viewed by Tesla staff even when a car was parked, even when a car was turned off. In other words, the cameras are billed as safety devices, yet they were potentially on all the time without Tesla owners being aware.

Tesla states in its online “Customer Privacy Notice” that its “camera recordings remain anonymous and are not linked to you or your vehicle.” But seven former employees told Reuters the computer program they used at work could show the location of recordings – which potentially could reveal where a Tesla owner lived.

One ex-employee also said that some recordings appeared to have been made when cars were parked and turned off. Several years ago, Tesla would receive video recordings from its vehicles even when they were off, if owners gave consent. It has since stopped doing so.

It has stopped? Prove that is true.

[Investigators have not been] able to determine if the practice of sharing recordings, which occurred within some parts of Tesla as recently as last year, continues today or how widespread it was.

There is no reason for Tesla staff to be pulling up videos of the inside of people’s garages from parked cars that have been turned off, especially given the personally identifiable data involved and the fact that such frames have absolutely nothing to do with safety. It seems like a culture of abuse and little more.

If this were a hospital, for example, we’d be talking about doctors and nurses who engage in grossly negligent safety practices, violating patient privacy at large scale.

“We could see inside people’s garages and their private properties,” said another former employee. “Let’s say that a Tesla customer had something in their garage that was distinctive, you know, people would post those kinds of things.” […] About three years ago, some employees stumbled upon and shared a video of a unique [object] inside a garage, according to two people who viewed it.

Stumbled? Like the employees were drunk?

Two ex-employees said they weren’t bothered by the sharing of images, saying that customers had given their consent or that people long ago had given up any reasonable expectation of keeping personal data private. Three others, however, said they were troubled by it.

“It was a breach of privacy, to be honest. And I always joked that I would never buy a Tesla after seeing how they treated some of these people,” said one former employee.

Another said: “I’m bothered by it because the people who buy the car, I don’t think they know that their privacy is, like, not respected … We could see them doing laundry and really intimate things. We could see their kids.”

Drunk with cruelty from abuse of power. In related news, Gartner is strongly advising companies to “weaponize” privacy — encouraging competitors to shoot Tesla dead.

“Weaponise privacy as a prospect conversation tool and a competitive advantage,” said Neubauer. “By making privacy a key part of your customer value proposition, privacy has become a conviction-based motivator for buyers. Just as people reach for organic or cruelty-free products, consumers are willing to go out of their way, and in some instances, pay a premium for a product they believe will care best for their data.”

Cruelty-free products? Pretty sure Gartner just defined Tesla as a cruel and worthless product that doesn’t care for privacy.

Data Integrity Breaches Are Killing Trust in AI

Here’s the money quote from Roger McNamee:

So long as we build AIs on lousy content, the results are going to be lousy. AI will be right some of the time, but you won’t be able to tell if the answer is right or wrong without doing further research, which defeats the purpose.

I generally disagree with the GIGO (garbage in, garbage out) meme, but here I love that McNamee calls out the lack of value. You ask the computer for the meaning of life and it spits out 42? Who can tell if that’s right unless they do the math themselves?

Actually, it gets even better.

Engineers have the option of training AIs on content created by experts, but few choose that path, due to cost.

Cost? Cost of quality data?

That’s a symptom of the last decade. Many rushed into an unregulated “data lake” mentality to amass quantity (variety and volume at velocity), with a total disregard for quality.

The “get as many dots as possible so you can someday connect them” mindset (a sort of rabid data consumption and hoarding) has gradually given way to “collect only the things you can use.”

While McNamee claims to be writing about democracy, what he’s really saying is that the market is ripe for a data innovation revolution that reduces integrity breaches.

Practical technology solutions desperately need to be brought into such “save our democracy” discussions.

A simple example is the W3C Solid protocol. It offers real, present steps towards doing the right thing, and it would put AI companies well ahead of the safety baseline now looming from smart regulators like Italy’s.
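To make that concrete, here is a minimal sketch (my illustration, not anything prescribed by McNamee or lifted from a particular Solid implementation) of what a Solid-style personal data store interaction looks like: resources live at plain HTTP URLs on a pod the owner controls and are read and written with ordinary HTTP verbs, so the owner, not the AI vendor, curates what quality data exists and who may fetch it. The pod URL and resource names below are hypothetical, and a real pod would require Solid-OIDC authentication.

```python
# Hypothetical sketch of reading/writing a resource in a Solid personal data store.
# Assumes a pod at alice.example.org with open write access for illustration only;
# real pods require Solid-OIDC authentication and access-control policies.
import requests

POD_RESOURCE = "https://alice.example.org/public/curated/reading-list.ttl"  # hypothetical URL

# The owner curates a small RDF/Turtle document describing a vetted source.
turtle_doc = """
@prefix schema: <http://schema.org/> .
<#note1> a schema:CreativeWork ;
    schema:name "Vetted source on training data quality" ;
    schema:dateCreated "2023-04-05" .
"""

# Write (curate) the resource into the owner's pod with a plain HTTP PUT.
resp = requests.put(
    POD_RESOURCE,
    data=turtle_doc.encode("utf-8"),
    headers={"Content-Type": "text/turtle"},
)
resp.raise_for_status()

# An AI company granted access would read the same URL with a plain HTTP GET,
# so the data it trains on is whatever the owner chose to expose, and no more.
print(requests.get(POD_RESOURCE, headers={"Accept": "text/turtle"}).text)
```

The point isn’t the particular library calls; it’s that the protocol makes owner-curated quality and consent the default architecture rather than an afterthought.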

Taking regulatory action against one of the worst abusers of users, OpenAI, is definitely the right move here.

Last week, the Italian Data Protection Watchdog ordered OpenAI to temporarily cease processing Italian users’ data amid a probe into a suspected breach of Europe’s strict privacy regulations. The regulator, which is also known as Garante, cited a data breach at OpenAI which allowed users to view the titles of conversations other users were having with the chatbot. There “appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies,” Garante said in a statement Friday. Garante also flagged worries over a lack of age restrictions on ChatGPT, and how the chatbot can serve factually incorrect information in its responses. OpenAI, which is backed by Microsoft, risks facing a fine of 20 million euros ($21.8 million), or 4% of its global annual revenue, if it doesn’t come up with remedies to the situation in 20 days.

It’s the right move because the breach reported by OpenAI users is far worse than the company admits, mainly because integrity failures are not regulated well enough to force disclosure (they fall far behind confidentiality/privacy laws).

20 days? That should be more than enough time for a company that rapidly dumps unsafe engineering into the public domain. I’m sure they’ll have a fix pushed to production in 20 hours. And then another one. And then another one…

But seriously, the systemic and lasting remedies they need (such as personal data stores that let owners curate quality) have been sitting right in front of them. Maybe the public loss of trust from integrity breaches, coupled with regulatory action, will force the necessary AI innovation.

Ukrainians Decisively Reject Russian Narratives of Internal Divisions

Here’s some context from a new Carnegie Europe report on the recent assassination of a Russian Telegram star.

Established within the National Security Council in 2021, the Center on Countering Disinformation debunks Russia’s manipulative and misleading narratives, including through social media platforms. This is a formidable task as many of these platforms, especially Telegram, have become a safe haven for disinformation due to lack of scrutiny and proper moderation policies.

Especially Telegram.

The tone of this report emphasizes how Ukraine easily regulates and rebuffs disinformation using curated sources of trusted information.

Investigative journalists and civil society organizations, such as StopFake and Detector Media, complement governmental efforts in checking facts and providing accurate information. A December opinion poll found that Ukrainians, including in the most vulnerable southern and eastern regions, decisively reject Russian narratives of internal divisions and Western betrayal of the country.

We see Ukraine described in terms of protecting the most vulnerable and preventing harms.

The report goes on to say that heavy regulation, including the forced breakup of oligarchic control over media, is Ukraine’s charted path for freedom of speech.

Ukraine’s resilience in the information war has created momentum for deepening reforms to preserve media freedom and pluralism of views. As a part of the conditionality for membership, the EU called for introducing legislative norms that would regulate the media sector in accordance with its directives in this field. In December 2022, the parliament passed the required law. If properly implemented, the law would not only strengthen the instruments to counter Russian disinformation but also develop norms to ensure transparency and the independence of media from undue political influence.

All of this points towards Russia being the most likely motivated assassin of its own journalists.

First, it’s a common tool of Putin. Second, the Russian victim could easily have stepped over a line that triggered the dictator’s press-killing secret police. Third, internal divisions in Russia are growing severe over the bungled mismanagement of the war with Ukraine.

The real question about the assassination is: how could it not be Russians killing each other? Ukraine hasn’t needed to resort to such tactics, given its commanding control of the information domain.

While Ukrainians show steady resistance to narratives of internal division, Russia (like a Tesla factory) viciously attacks its own top performers to kill speech about obvious internal fragmentation.

That being said, an explosion is unusual for Russian state assassins. It is also somewhat significant that it happened in a Russian city lounge being “guarded” by far-right militants.

The attack carries hallmarks of Russian domestic anti-war extremists.

The primary target wasn’t a journalist or reporter in the usual sense. He had been a coal miner and was jailed in Ukraine for bank robbery. In 2014 he “escaped” with Russian help to become a militant separatist within Ukraine. His Telegram role was essentially that of a Russian puppet, coddled by military handlers who inserted him into high-risk war zones to generate disinformation. You can see why he thought he was safe, and where.

Obviously, targeting the victim in a plainly vulnerable Petersburg cafe, surrounded by at least two dozen of his fans (13% of Russian Telegram users are in that city, second only to Moscow), sends a strong message of resistance to Russians.

Or as Ukrainians have expertly explained:

“Spiders are eating each other in a jar,” Ukrainian presidential adviser Mykhailo Podolyak wrote in English…. “Question of when domestic terrorism would become an instrument of internal political fight was a matter of time.”

A Ukrainian pro-Russian militant extremist propaganda leader, who promoted killing civilians (“we will kill everyone, we will rob everyone”), seems to have been killed in a civilian setting by Russian anti-war militant extremists.

An assassination doesn’t fit with Ukraine’s increasing success in disarming disinformation at every level. They would have no real need to expend that kind of heavy effort to physically target such a mediocre Moscow blogger visiting his Petersburg fans.

That doesn’t mean it wasn’t Ukraine, just that it has stronger hallmarks of local action. And if Russian authorities crack down even harder on expression now, it becomes increasingly difficult to argue any increase in incidents inside Russia isn’t inevitable domestic resistance.

I’ve been asked about the explosive, and it seems far too early to make that kind of call. I’m reminded of giving a talk at a mystery writers’ conference about how to hack into computers, where the distinguished speaker immediately before me was an explosives expert describing how to assassinate people (mostly with cars). Apparently there’s some kind of shared theme here? My, how times change.

We certainly know the target was killed while some 30 people around him were injured, which suggests very high-precision planning. Since the statue arrived unexpectedly as a shiny gift in a box, made in the image of the target himself, it seems an obvious play on a Telegram star’s glaring insecurity: curated, not just plain explosives. And there does seem to be a thread suggesting the statue was part of a compound attack, somehow positioning the target for the closest proximity harm from an earlier planted incendiary.

But what do I know about those things? I’m just the computer guy who studies information integrity.