Huge Tesla Leak: Lawyers Argue NDA Magically Censors All Speech They Don’t Like

Perhaps by now it has become common knowledge that Tesla pay is illegally far below market, that work benefits are in dangerous and unhealthy decline, and that staff are particularly screwed should they dare to invoke basic ethical concerns (e.g. discussing customer or worker safety).

He Died Helping Build Tesla’s Gigafactory. Tesla Didn’t Tell Local Officials.

It’s like reading what a car company would be if Queen Isabella came back from the dead to prove her ruthless deadly Inquisition was a business model.

Torture is so cruel, it is so dehumanizing to both the tortured and the torturer, that it is always wrong, unconditionally. “If torture is evil,” [Ron Gassner] writes, “its efficacy is irrelevant. Those who know it to be evil should reject torture outright, regardless of how efficacious it may or may not be.”

What if Tesla’s work culture is always wrong, unconditionally?

A toxic abuse culture of oppression for profit, as history surely tells us, means we should expect whistleblowers and some big leaks about basic morality failures (e.g. claims about the Queen of Tesla ignoring safety).

The drivers named in the leaks and contacted by the newspaper accused Tesla of brushing off their concerns about its Autopilot technology. It is alleged that employees are given strict guidelines over how to reply to complaints, with some drivers claiming that Tesla workers were urged to avoid written communication to “offer as little attack surface as possible.”

Nobody expects the Tesla Inquisition?

Written communication about customer complaints gives… customers an attack surface?

Customers.

Who is the enemy here?

Perhaps Spain is too obscure a reference. Tesla does burn its customers alive, like in the Inquisition, but we’re talking modern technology.

Did a zombie Nixon come back to become the crazy old man of Tesla?

The files shared with Handelsblatt included details of 2,400 complaints about cars accelerating unexpectedly and 1,500 automatic braking problems – including 139 cases of unintentional emergency braking and 383 “phantom stops”. One customer, who complained about his car “phantom braking”, claimed Tesla showed an “absolute lack of any concern given the seriousness of the security problems”.

That’s like an American citizen saying Nixon had an absolute lack of concern about the seriousness of destroying democracy.

Uh, yeah. Duh.

Allegedly Tesla believes they have perfected a draconian NDA that is so regressive they internally consider it a legal sledgehammer that criminalizes any and all speech they don’t like regardless of safety.

The company said a former employee had “misused his access as a service technician to exfiltrate information in violation of his signed non-disclosure agreement, Tesla’s data management policies…”.

Let me guess, the data management policy says if any data involves customer safety concerns it must not be retained.

It does seem like they learned the exact wrong lessons from Nixon.

“Well, the hell with Dean,” Nixon told Haldeman that Monday morning in the Oval Office. “Frankly, I don’t want to have in the record discussions we’ve had in this room on Watergate.” In another conversation later in the day, the president agreed with Haldeman that they ought to “get rid” of the recordings.

But seriously, while trying to cancel the voice of its customers, Tesla has very sloppily leaked things they probably wish they had deleted.

The files include tables containing more than 100,000 names of former and current employees, including the social security number of Tesla CEO Musk, along with private email addresses, phone numbers, salaries of employees, bank details of customers and secret details from production, Handelsblatt reported. The breach would violate the GDPR, it said. If such a violation was proved, Tesla could be fined up to 4 percent of its annual sales, which could be 3.26 billion euros.

If you ask me how Germany was tricked by Tesla’s CEO into opening a clearly horrible and degenerative factory riddled with ethical lapses, I’d say read the related American history.

…manipulative, master politician overseeing every detail: approving a “shakedown”… for donations, fixing the price…, orchestrating “dirty tricks” against opponents, thanking the donor of hush money….

Very modern stuff. Still reminds me of 1470s Spain. I don’t think any German politicians have been burned alive… yet.

The bottom line is there should be an outright ban (e.g. Speak Out Act) on Tesla’s obvious abuse of any NDA in automotive safety disputes.

Tesla has a long history of trying to cover up customer complaints about safety problems. As far back as 2016, the National Highway Traffic Safety Administration had to announce that customers were allowed to publicize safety issues after reports that Tesla was requiring customers to sign nondisclosure agreements to qualify for warranty repairs on problematic Model S suspension systems.

That raises the question of whether Tesla should also be investigated for suspicious NDA language used to fill its German factory with the “chaos” of easily manipulated and abused foreign workers who have no clue about safety or rights violations.

When Gregor Lesnik left his pregnant girlfriend in Slovenia for a job [far away in another country], his visa application described specialized skills and said he was a supervisor headed to a [completely different] auto plant.

Turns out, that wasn’t true.

The unemployed electrician had no qualifications to oversee… workers and spoke only a sentence or two of [the required foreign language]. He never set foot in [the facility that was written into his papers, a direct competitor to Tesla, to defraud the government]. The companies that arranged his questionable visa instead sent Lesnik to a menial job…. He earned the equivalent of $5 an hour to expand the plant for… Tesla.

Lesnik’s three-month tenure ended a year ago in a serious injury and a lawsuit that has exposed a troubling practice… [of Tesla lying to everyone about everything while censoring others].

That’s just a hint of what could be ahead for any journalist brave enough to interview silenced Tesla factory workers in Germany… as predicted by a 2016 report that didn’t get nearly enough attention.

The part where they report Tesla had lawyers take desperate unskilled Eastern Europeans to apply for visas for work at a BMW factory, and then under strict NDA illegally redirected them into Tesla factory jobs with no qualification or safety… it’s so evil, you can’t make this stuff up.

Think about it. If their NDA-based visa fraud gets investigated, Tesla planted false flags to misdirect authorities for its competitors to get in trouble. That’s a very 1980s South African way of undermining government while wrecking markets too.

Tesla reminds me of when Ford was excitedly pushing politicians to increase production of its cars in Germany using slave labor from Eastern Europe. Who else remembers?

Right Germany?

I mean, right? Extreme right?

Did you Nazi this coming?

Fingerprint Brute Force Easily Breaks Android Phones

The old joke was that putting fingerprint readers on devices meant people would get their hands cut off, or at least be drugged so their hands could be used without consent.

Source: https://xkcd.com/538/

Of course we haven’t heard much about such “rubber hose” attacks, even as fingerprint readers have been put into practice everywhere on everything. The best real world threat so far, perhaps, has been “gummy bear” integrity attacks a decade ago.

A new research paper tries to bring sensor security back into focus by demonstrating a simple brute force method, which only seems to work on Android phones due to… Google’s infamously low security bar.

An attacker needs only a fingerprint database — given that open research or centralized collection makes giant leaks inevitable — and about $20 in hardware to quickly brute force any Android device.

Because fingerprints are generally low integrity to start with, researchers point out how they benefited from using a simple manipulation of “false acceptance rate” (FAR). That’s as bad as it sounds. Your safety depends on an adjustable acceptance rate of bad authentication. How many bad attempts would you like to treat as good ones? In addition, they mention how the serial peripheral interface (SPI) on Android can be compromised to leak fingerprints (while iOS by comparison reasonably encrypts the SPI).

When just one fingerprint is enrolled, the researchers estimate 3 to 14 hours to brute force their way in. When more than one fingerprint is enrolled, their estimate drops to only half an hour to three hours to force a collision!
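The intuition behind those numbers can be sketched with simple geometric-distribution arithmetic: if each attempt matches a given enrolled print with probability FAR, then enrolling more prints multiplies the chance that any single attempt gets lucky. The FAR value, attempt rate, and enrollment counts below are illustrative assumptions, not figures from the paper:

```python
# Back-of-the-envelope estimate of brute-force time when FAR (false
# acceptance rate) is manipulated upward. All numbers are illustrative
# assumptions, not figures from the research paper.

def expected_attempts(far: float, enrolled: int) -> float:
    """Expected tries until one candidate print matches any enrolled print.

    Each attempt independently matches a given enrolled print with
    probability `far`, so the chance an attempt matches at least one of
    `enrolled` prints is 1 - (1 - far) ** enrolled, and the expected
    number of attempts is the reciprocal (geometric distribution).
    """
    p_match = 1.0 - (1.0 - far) ** enrolled
    return 1.0 / p_match

def hours_to_break(far: float, enrolled: int, attempts_per_hour: float) -> float:
    """Convert expected attempts into wall-clock hours at a given rate."""
    return expected_attempts(far, enrolled) / attempts_per_hour

# With an (assumed) inflated FAR of 0.1% and 1,000 attempts per hour:
one_print   = hours_to_break(0.001, 1, 1000)  # roughly 1 hour
five_prints = hours_to_break(0.001, 5, 1000)  # roughly 5x faster
```

The point of the sketch is only that brute-force time scales inversely with both the FAR and the number of enrolled prints — which is why letting an attacker nudge the FAR upward, or enroll-count collisions, collapses the search so dramatically.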

∀F(Fx ↔ Fy) → x=y

The big question becomes whether fingerprint gathering methods that produce a specialized database (e.g. things thrown into the trash by targets), instead of generalized prints, would reduce times even more dramatically. Then again, with a half-hour estimate given $20 in tools, why bother?

Are Tesla Cars Being Trained to Cause More Crashes?

TRIGGER WARNING: This post has important ideas and may cause the reader to think deeply about the Sherman-like doctrine of property destruction to save lives. Those easily offended by concepts like human rights should perhaps instead read the American Edition.


Let us say, hypothetically, that Tesla tells everyone they want to improve safety while secretly they aim to make roads far less safe.

Here is further food for thought. Researchers have proven that AI developed under a pretense of safety improvement can easily be flipped to do the opposite and cause massive harm. They called their report a “dual-use discovery”, as if any common tool like a chef knife or a hammer is “dual-use” once someone figures out how to weaponize it. Is that really a second discovery, when the only other use option… is the worst one?

According to The Verge, these researchers took AI models intended to predict toxicity, which is usually billed as a helpful risk prevention step, and then instead trained them to increase toxicity.

It took less than six hours for drug-developing AI to invent 40,000 potentially lethal molecules. Researchers put AI normally used to search for helpful drugs into a kind of “bad actor” mode to show how easily it could be abused at a biological arms control conference.
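The mechanics behind that flip are disturbingly trivial: a search loop built to minimize a harm score can be repurposed to maximize it by negating a single term in the objective. A toy sketch, with an entirely hypothetical scoring function and candidate set (not the researchers’ actual code or data):

```python
# Toy illustration of the "dual-use" flip: the same search loop that
# minimizes a harm score can maximize it by negating one term in the
# objective. The scoring function and candidates are hypothetical
# stand-ins, not anything from the actual drug-discovery research.

def toxicity_score(candidate: str) -> float:
    # Hypothetical stand-in: pretend longer strings are "more toxic".
    return float(len(candidate))

def search(candidates, maximize_harm: bool = False) -> str:
    """Pick the best candidate under the chosen objective.

    The only difference between the "safety" tool and the "bad actor"
    tool is the sign applied to the toxicity term.
    """
    sign = 1.0 if maximize_harm else -1.0
    return max(candidates, key=lambda c: sign * toxicity_score(c))

candidates = ["aa", "aaaa", "aaaaaaaa"]
safe_pick = search(candidates)                      # least "toxic" candidate
harm_pick = search(candidates, maximize_harm=True)  # most "toxic" candidate
```

One flipped sign, same model, same search: that is the whole “discovery”, which is exactly why treating the safe version as inherently safe is a mistake.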

Potentially lethal. Theoretically dead.

The use-case dilemma of hacking “intelligence” (processed data) is a lot more complicated than the usual debate about how hunting rifles are different from military assault rifles, or that flame throwers have no practical purposes at all.

One reason it is more complicated is that America generally has been desensitized to high fatality rates from the harmful application of automated machines (e.g. after her IP was stolen, the cotton engine service model — or ‘gin — went from being Caty Greene’s abolitionist invention to a bogus justification for the expansion of slavery all the way to the Civil War). Car crashes often are treated as “individual” decisions given unique risk conditions, rather than seen as a systemic failure of a society rotating around profit from the criminalization of poverty.

Imagine asking questions like: what is the purpose of the data related to use of a tool, how is it being operated and purposed, and can a systemic failure be proven by examining it from origin to application (e.g. lawn darts or the infamous “Audi pedal”)? Is there any proof of failsafe or safety?

Lots of logic puzzles come up in threat models, which most people are nowhere near prepared to answer at the tree let alone forest level… perhaps putting us all in immediate fire danger without much warning.

Despite complexity, such problems actually are increasingly easily expressed in real terms. Ten years ago when I talked about it, audiences didn’t seem to digest my warnings. Today, people right away understand exactly what I mean by a single algorithm that controls millions of cars all simultaneously turned into a geographically dispersed “bad actor” swarm.

Tesla.

Where is this notoriously bad actor with regard to transparency and even proof on such issues? Can they prove cars are not, and can not be, trained as a country-wide loitering munition to cause mass casualties?

What if their uniquely bad, already mounting death tolls are a result of them developing AI since 2016 (ignoring, allowing, enabling or performing) such that their crashes have been increasing in volume, variety and velocity due to an organized, intentional disregard for law and order?

ChatGPT, what do you think, given that Tesla now claims that you were its creation?
