Why Are So Many Tesla Dealerships Being Set on Fire?

Tesla has set up a tightly controlled and opaque system of car distribution and repair, an arrangement that tends to hide incompetence. The usual counterargument they make is that they can do things opaquely better than anyone else could do openly, without providing any means to verify such claims.

That bold positioning throws Tesla dealers into a particular spotlight when they suffer large, destructive fires, illuminating how they aren’t better after all.

Florida had a big one just last year, as did Massachusetts. A giant one in California, causing some $300K in damage, was interesting particularly because Tesla said it couldn’t figure out the cause. Korea reported one too. And now we have yet another dealer on fire, confirming Tesla really doesn’t know what it’s doing.

Fort Lauderdale Fire Rescue officials said crews responded to the Tesla dealership property in the 2700 block of North Federal Highway just after 6:30 a.m.

When they arrived, one Tesla was engulfed in flames and the fire was spreading to an adjacent Tesla, officials said.

“I heard a big boom and saw flames coming out of the Tesla dealership,” witness Kassandra Mitchell said.

At this point you might think twice before taking your Tesla to a dealer, let alone buying from one. A private, independent engineer or mechanic is likely to be far more competent than Tesla’s own.

That may not be a viable option, however, since storing and working on faulty Tesla designs carries a huge fire risk.

Gruber Motors, a repair shop that specialized in fixing Tesla failures, has already burned down completely twice.

As with a previous fire we had in May of 2017, this fire again consumed the entire building and all its contents, which illustrate the potent nature of Lithium Ion vehicles once they ignite.

Gruber had 30 Teslas waiting on repairs and calls itself an “independent Tesla service organization,” so when they say lithium-ion vehicles they mean Tesla and only Tesla.

This is not an EV problem. This is not a battery problem. This is a Tesla problem.

How many Teslas are in your lot?

RSAC SF 2023: Why Are XDR Vendor Pitches Garbage?

Yesterday I listened to Cisco try to argue that musicians can’t play music apart from one another and have their sounds stitched back together.

This is absolutely untrue. Any musician working to a time signature and a tempo in beats per minute can lay a track down without hearing any other musician. Here’s a perfect example: if I record a percussion instrument, it’s not necessary to hear the other musicians. You’re probably thinking of a drum beat, but a piano is also a percussion instrument. Go ahead, record your track in a soundproof studio and let someone mix it later with other instruments and vocals. It will be fine. Time after time.

Cringeworthy stuff.

Today Trellix gave a talk where they said that if you’re a goalie and block 9 out of 10 shots, then you shoulder all the blame if your team loses 1 to nil.

Again, absolutely untrue. Blame also goes to those on your team who failed to score. In fact, and this is where I really had to laugh, Trellix closed their presentation by arguing you can’t win with defense alone. The speaker literally blamed the offense after saying defense would take all the blame.

Offensive hacking out of a SOC run by robots. Wat.

I mean they literally destroyed their own presentation. They gave listeners a case for not sinking any more money into security response and instead floating offensive roles… a shift I have been talking about for at least ten years (yolo ArcSight) and definitely WOULD NOT trust a robot with.

To put it simply, the analysis in these XDR pitches is seriously lacking. There’s no way trust should go to companies that fail to articulate an understanding of the problems. The music and sports analogies are flat wrong.

I don’t know if it’s just XDR that is causing these presentations to fall on their face, but these XDR vendors definitely aren’t making sense.

RSA SF Conference (RSAC SF) Day One: Misinformation is the New Malware

Here are my notes from the three-person panel at RSA moderated by Ted Schlein (Silicon Valley venture capitalist), which to my ears was itself dangerously pushing misinformation. Not only was it far too American in focus (literally asking the world to serve U.S. corporate interests) and far too fluffy in pushing unregulated corporations as competent and good while regulators are clumsy and bad, it also had an obvious fatal flaw: suggesting that breaking windows is good for them and their business:

Yoel Roth, academic and ex-Twitter staff

Three-part test for censorship (sketched in code after the list)

  1. Does the opinion advance a statement of fact?
  2. Is it provably false according to experts or data?
  3. Is it harmful?
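
To make the test concrete, here is a minimal sketch of how it could be expressed as a filter. This is my own illustration, not Twitter’s actual implementation; the class and field names are hypothetical.

    # Minimal sketch (Python) of the three-part test described above.
    # My illustration only, not Twitter's code; names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Claim:
        advances_fact: bool    # 1. Does the opinion advance a statement of fact?
        provably_false: bool   # 2. Is it provably false according to experts or data?
        harmful: bool          # 3. Is it harmful?

    def in_scope(claim: Claim) -> bool:
        """Only claims meeting all three conditions are candidates for action."""
        return claim.advances_fact and claim.provably_false and claim.harmful

    # Example: a false but harmless statement does not meet the test.
    assert in_scope(Claim(advances_fact=True, provably_false=True, harmful=False)) is False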

The three-part test was used to decide what to focus on. Full traffic used to be half a billion posts a day, though it has gone down lately. Three categories of misinformation, each in the hundreds of millions of posts, propagated quickly:

  • Healthcare
  • Political
  • Crisis

A significant quantitative challenge, even at just hundreds of millions of events.

Attackers are using integrated battlefields, and you’ve already lost if you don’t consolidate and get rid of internal diversity/delegations. Individual defenses miss how things play out together. The grey hack of the highest-profile Twitter accounts using a spearphish of a content moderator is an example. Getting credentials is a playbook that could matter a lot in an integrated, multi-front approach. Twitter staff, Twitter backend systems, and Twitter content cannot be looked at except as a unified problem.

We have an obligation to protect users from TikTok because it is a Chinese tool. The question of America banning it is thorny, because the security community has a responsibility to protect users from a bad platform.

What is misinformation? I was hauled in front of Congress because of a misunderstanding about misinformation. A cabal of companies getting together to decide what is right or wrong is bad. Companies working together to share information about Russian troll farms is doable, and I did that starting in 2017. We focused on inauthentic behavior by adversaries we didn’t understand (sophisticated ones). We need to be clear about what we’re working on, or we end up working on very hard problems with messy vocabulary.

Lisa Kaplan, CEO

Disinformation is fast, cheap, and easy to do. The 2016 Russian attacks have become commonplace now; anyone can spin up a network. The Chinese are getting more aggressive and focused on undermining the fabric of our economy and of democracies in our society and others.

Competitors launch short-seller attacks.

It’s just like malware, but happening out in the open so it can be caught. Organizations can take steps earlier than they can with malware and stop their stock price from tanking.

(NOTE: this is patently untrue; malware is not only observable, it is widely shared)

This is going to get significantly worse, but I’m an optimist. The weakest link is always people.

Think about vaccine disinformation as someone trying to make your staff sick: an anti-competitive practice that prevents people from being able to work.

Misinformation is helping organizations collaborate more across internal departments. CISOs get to work with other groups like communications, legal, and government affairs. In 2019 it was seen as a communications problem, but savvy organizations now see it as a business problem.

There are threats in countries that don’t have a First Amendment, and attacks on U.S. social media platforms have an impact globally. The world doesn’t have one set of laws. We’ll be better off if people around the world come together and work together to help U.S. companies defend themselves against attacks.

Cathy Gellis, Lawyer

Things you want the law to say no to, the law can’t say no to because the First Amendment shows up.

Wrongness happens, and when a law that forbids being wrong chills speech, you have a problem. Who decides what is wrong? Government deciding is dangerous, because truth rules made by politicians will be gamed, and government offices would become political prizes because they could control speech.

Platforms need latitude to figure out what is wrong and which ideas and users they want to be associated with. The law doesn’t get to tell them yes or no.

Some laws enable and protect things. Section 230 takes the First Amendment and makes it usable: it gives platforms the ability to figure out how they want to moderate speech without fear of interference (they can do their worst or their best without oversight).

Section 230 is so misunderstood that people want to take it away without understanding the consequences. Solutions are technical; governments should give private actors all the rope they want.

It’s unlikely a government can ban TikTok, because the concerns about it don’t match the regulations. Data capture and the resulting privacy loss are a concern and could be regulated, but the U.S. doesn’t have privacy regulation at the federal level. The government doesn’t have anything coherent and isn’t acting on the actual problem. Some regulators are talking about content quality and trying to regulate what American kids can learn, with TikTok evading their government’s censorship.

Questions from audience

Banning TikTok on government-owned devices is breaking our public signs (cut off by moderator). Where will the support and funding come from to fix things?

Cathy: It amuses me that government bans have bad consequences.

What definition should we use for scoping and finding misinformation threats?

Lisa: Don’t you want to know what everyone is talking about on any malicious domain, state actor, or criminal network?

You said information can cause harm. What do you mean by harm?

Yoel: Great question. All things are connected. Caution against rigid definitions.

(NOTE: this is a total contradiction of what he said earlier, when he wanted a very tight definition and cautioned against being broad)

Veered Tesla on Interstate Kills Truck Driver

A truck driver on the Yankee Expressway has been killed by Tesla, according to police.

Initial reports indicate that software is unpredictably launching Teslas like missiles across lanes into other vehicles traveling in the same direction.

Recently I reported that Tesla’s latest software version, 11.3.4, seems dangerously unfit for general use.

It is very clear from video evidence that Tesla software now abruptly and without warning sends its cars rapidly off course, despite a straight road and a straight navigation path.

A driver has less than a second to grab and turn the wheel to prevent collision with a truck, as explained in that blog post.

This new fatality case has very similar symptoms, but instead of cutting in front of opposing traffic the car swerves like a drunk into vehicles on the same path. And I have to say I just personally experienced this: a Tesla abruptly jumped left across two lanes, nearly crashing into the car I was riding in.

My human driver slammed on the brakes to prevent the Tesla, suddenly accelerating in the same direction as us, from hitting our right side.

I mention it here mostly because the Tesla driver cutting left into us had a face of absolute terror. Her hands were in the air, her head was twisted back, and she stared open-mouthed at us, eyes wide, like she was being abducted.

Back to this new case, police are asking for help understanding how and why Tesla killed the truck driver.

According to the accident summary, Vallier was traveling east on I-84 in the right lane of three, not far from Exit 30.

Meanwhile, a 2023 Model 3 Tesla being driven by a 47-year-old Fairfield woman was traveling east on I-84 in the left lane of three lanes, police said.

[…]

“For an unknown reason,” according to police, the Tesla veered across the center lane and into the right lane, where it collided with the Ford pickup.

As a result of the collision, Vallier lost control of the Ford F250 and crossed the center lane into the left lane and, then, onto the grassy median, where it began to roll over, police said.

Unknown reason? That indicates the police know it wasn’t one of the usual causes of a lateral collision (e.g., a blown tire), and they’re investigating things like the quality of software engineering. How many police departments are poised for code review?

From my own analysis of emergent flaws in the latest Tesla software release, to personally witnessing the resulting erratic behavior… the probability seems very high that this company’s software quality is getting significantly worse over time, after its hardware was made less capable (e.g., radar removed).

Simply put, Tesla is becoming a clear and present danger to public safety, as I have warned here since 2016.