“All Show No Go” Truck Fiasco is a Monument to the Fraud of Tesla

In 2019 we all watched sleazy car-salesman tactics get 250,000 people to pay $100 for nothing.

Worse than nothing, they paid for the promise of a “tough” truck that immediately was revealed as fragile.

Ford demonstrates their Pinto safety design.

You may remember LEGO cleverly mocked this slime spectacle with what seemed to be a far superior toy truck design.

To put it another way, context matters here. LEGO puts a huge amount of engineering and careful craftsmanship into their vehicle replicas. Their recreations of famous cars are truly impressive at any scale.

Vehicle engineering typical of LEGO, in case their mockery of Tesla “genius” isn’t obvious.

So when LEGO threw together a minimal effort block they described as an improved version of the silly Tesla Truck design craze, it was literal mockery of inflated egos at Tesla peddling sadly simplistic ideas and low skills. LEGO slam dunked on the spectacle, wisely foreshadowing the truck’s predictable failures.

FastCompany is now laughing out loud at the little dictator running Tesla, after he just threw up his hands and issued an edict that the Truck must be built like a LEGO.

The problem, according to Musk, is the bright metal construction and predominantly straight edges mean that even minor inconsistencies become glaringly obvious. To avoid this, he commanded unparalleled precision in the manufacturing process, stating in his email that “all parts for this vehicle, whether internal or from suppliers, need to be designed and built to sub 10 micron accuracy. That means all part dimensions need to be to the third decimal place in millimeters and tolerances need [to] be specified in single digit microns.” …Musk added, “If LEGO and soda cans, which are very low cost, can do this, so can we.”

Commanded? Demanded? Unhinged.

If LEGO and soda cans can do this, why can’t a flamethrower at 100 meters perfectly turn an apple on my head into a delicious pie? I command you peons to make my fantasy a reality and if you fail I’ll just find more peons who keep believing.
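For scale on the numbers in Musk’s edict: one micron is 0.001 mm, so “single digit microns” does indeed mean dimensions resolved to the third decimal place in millimeters. A minimal Python sketch, using a hypothetical door-panel measurement (not any actual Tesla spec), of what such a tolerance check implies:

```python
# Unit sanity check: 1 micron = 0.001 mm, so a "single digit micron"
# tolerance means tracking dimensions to the third decimal place in mm.
MICRON_MM = 0.001

def within_tolerance(nominal_mm: float, measured_mm: float, tol_microns: float) -> bool:
    """Return True if a measured dimension is within +/- tol_microns of nominal."""
    return abs(measured_mm - nominal_mm) <= tol_microns * MICRON_MM

# Hypothetical panel edge: 1500 mm nominal, 9-micron tolerance.
print(within_tolerance(1500.000, 1500.008, 9))  # 8 microns off: passes
print(within_tolerance(1500.000, 1500.012, 9))  # 12 microns off: fails
```

The arithmetic is trivial; holding a full-size stainless steel body panel to it in production is the part that tiny, mass-molded LEGO bricks and soda cans make look deceptively easy.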

Herr Musk seems raised on the privilege of an unrelenting pursuit of selfish fantasy, unable to grasp basic reality. His toddler-like curations of design based on mysticism, as if they could replace actual engineering knowledge, may soon have his legions of unskilled enablers/believers headed for a rough and abrupt awakening.

What do you call it when, after three years, a giant flat shiny steel panel still produces the exact opposite of what was promised to a quarter-million people who put money down?

Advance fee fraud truck.

The dumb design promised to be on the road by 2021 is a failure by almost every measure, a monument to a sheltered elitist South African apartheid boy pushing symbolism over substance. America should take down the 1920s statues of General Lee and mount the 2020s Cyber Truck on columns instead. Start renaming the overtly racist failure of Lee Street to Cyber Truck Lane. Same stuff, lessons not learned, 100 years later.

At this point you have to ask how a car company can exist let alone be valued when it so very obnoxiously shows it can’t handle even the basics of car design.

Studebaker folded for less.

Altman’s OpenAI and WorldCoin Might Just Be Lying to Everyone on Purpose

Lately when people ask me about OpenAI’s ChatGPT just lying so brazenly, intentionally misstating facts and fabricating events, I explain that’s likely the purpose of the tool. It aims only to please, not ever to be “right” in any real sense let alone have any integrity.

ChatGPT lies and lies and lies. When caught, if you ask it to stop lying, it suggests removing attempts to be factual from its responses. This is like asking a waiter for soup and being presented with an obviously unsafe, dirty bowl. If you tell the waiter not to violate health codes, they offer a “clean” bowl filled with soap and water. “Inedible” puts it mildly; “egregious code violation” is more like it.

Over time the AI company has been getting worse, based on extensive direct experience helping law firms investigate fraud among the platforms offering chat services. Lately the ChatGPT software, for example, has tried to convince its users that the U.S. Supreme Court in fact banned the use of seatbelts in cars due to giant court cases in the 1980s… cases that SIMPLY DO NOT EXIST for a premise that is an OBVIOUS LIE.

I hate calling any of this hallucinations because at the end of the day the software doesn’t understand reality or context, so EVERYTHING it says is a hallucination and NOTHING is trustworthy. The fact that it up-sells itself as being “here” to provide accuracy, while regularly failing to be accurate and without accountability, is a huge problem. A cook who says they are “here” to provide dinner yet can NOT make something safe to eat is how valuable? (Don’t answer if you drink Coke).

Ignoring reality while claiming to have a very valuable and special version of it appears to be a hallmark of the Sam Altman brands, building a record of unsafely rushing past stop signs and ignoring red lights like he’s a robot made by Tesla making robots like Tesla.

He was never punished for those false statements, as long as he had a new big idea to throw to rabid investors and a credulous media.

Fraud. Come on regulators, it’s time to put these charlatans back in a box where they can’t do so much harm.

Fun fact, the CTO of OpenAI shifted from being a Goldman Sachs intern to being “in charge” of a catastrophically overpromised and underdelivered unsafe AI product of Tesla. It’s a wonder she hasn’t been charged with over 40 deaths.

Here’s more evidence on the CEO, from the latest about his WorldCoin fiasco:

…ignored initial order to stop iris scans in Kenya, records show. …failed to obtain valid consent from people before scanning their irises, saying its agents failed to inform its subjects about the data security and privacy measures it took, and how the data collected would be used or processed. …used deceptive marketing practices, was collecting more personal data than it acknowledged, and failed to obtain meaningful informed consent…

Sam Altman runs a company that failed to stop when ordered to do so, continued to operate immorally and violate basic safety, as if “never punished”.

This is important food for thought, especially given OpenAI has lately taken to marketing wild, speculative future-leaning promises about magically achieving “Enterprise” safety certifications long before it has done the actual work.

Trust them? They are throwing out a lot of desperate-to-please big ideas for rabid investors, yet there’s still zero evidence they can be trusted.

Perfect example? Their FAQ about privacy makes a very hollow-sounding yet eager-to-please statement that they have been audited (NOT the same as stating they are compliant with requirements).

Fundamentally, these companies seem to operate as though they can be above the law, peddling intentional hallucinations to placate certain people into being trapped by a “nice and happy” society in the worst ways possible… reminiscent of drug dealers peddling political power-grabs and fiction addiction.

“In the United States there are no Peugeot or Renault cars!”

Here’s how a Peace Corps veteran tries to illustrate the presence and effects of French colonialism.

I shall never forget a Comorian friend’s reaction to his first trip to the United States. Arriving back in Moroni, rather than enthusiastically describing skyscrapers, fast food, and cable TV, his singular observation was that in the United States there are no Peugeot or Renault cars! This piece of technology, essential to Comorian life, had always been French, and this Comorian was shocked to learn that there were alternatives.

Why would anyone ever expect someone with access to daily delicious fresh fruit and fish to ever enthusiastically describe… fast food?

Yuck!

Skyscrapers?

Wat.

The French passed draconian laws and did worse to require colonies (especially former ones) to only buy French exports. I get it, yup I do. So Comorians lived under artificial monopoly, and only knew French brands. Kind of like how the typical American who visits France says “I need a coffee, where’s the Starbucks?” Or the American says “I need to talk with my family and friends, where’s the Facebook?”

Surely being forced by frogs into their dilapidated cars, however, still rates quite far above entering into the health disaster of American fast food. A Comorian losing access to delicious, locally made, slow, high-nutrition cuisine is nightmare stuff.

But seriously…

“This piece of technology, essential to Comorian life” is a straight up Peace Corps lie about cars.

Everyone (especially the bumbling French DGSE) knows a complicated expensive cage on four wheels is unessential to island life, inefficient, and only recently introduced. Quality of life improves inversely to the number of cars on a single lane mountain road.

Motorbikes? That’s another story entirely, as an actual “unexpected” power differential, which the Israelis, Afghans, Chinese and lately Ukrainians very clearly know far too well (chasing British and Japanese lessons).

Go home, Peace Corps guy, your boring big car ride to a lifeless big skyscraper box filled with tasteless Big Macs is waiting. Comorians deserve better. American interventionists should try to improve conditions locally and appropriately, not just drive former colonies so far backwards they start missing French cars.

Deception From a BlackHat Conference Keynote About Regulating AI

One note from this year’s typical out-of-tune “BlackHat” festivities in Las Vegas has struck a particularly dissonant chord — a statement that regulators habitually trail behind the currents of innovation.

A main stage speaker expressed a strange sentiment that stood in contrast to the essence of good regulation itself, which strives to act as an anticipatory framework for an evolving future (much as a car’s regulating systems, such as brakes and suspension, are designed for the next turn, not just the last one).

At the conference’s forefront, a keynote presented what the speaker wanted people to believe was a historical pattern: governmental entities mired in reactionary postures, unable to get “ahead” of emerging technologies.

Unlike in the time of the software boom when the internet first became public, Moss said, regulators are now moving quickly to make structured rules for AI.

“We’ve never really seen governments get ahead of things,” he said. “And so this means, unlike the previous era, we have a chance to participate in the rule-making.”

Uh, NO. As much as it’s clear the keynote was attempting to generate buzz and excitement from opportunity, its premise is entirely false. This situation is not at all “unlike the previous era.” In fact, AI is as old as, if not older than, the Internet.

If what Moss said were even remotely true about regulators moving quickly to get ahead of AI, unlike other technology, then the present calendar year should read 1973 NOT 2023.

Regulations in fact were way ahead of the Internet and often credited for its evolutionary phases, for better or worse, making another rather obvious counter-point.

  • Communications Act of 1934 reduced anti-government political propaganda (e.g. Hearst’s overt promotion of Hitler). It created the FCC to break through corporate-monopolistic grip over markets from certain extreme-right large broadcast platforms and promoted federated small markets of (anti-fascist) publishers/broadcasters… to help with the anticipated emerging conflict with Nazi Germany.
  • Advanced Research Projects Agency Network, funded by the U.S. Department of Defense, in 1969 created the “first workable prototype of the Internet” realizing a 1940s vision of future decentralized peering through node-to-node communications.
  • Telecommunications Act of 1996 inverted 1934 regulatory sentiment by removing content safety prohibiting extreme-right propaganda (predictably restarting targeted ad methods to manipulate sentiment, even among children), and encouraged monopolization through convergence of telephones, television and Internet.

I blame Newt “Ideas” Gingrich for the extremely heavy dose of crazy that went into that 1996 Act.

You hear Gingrich’s staff has these five file cabinets, four big ones and one little tiny one. No. 1 is ‘Newt’s Ideas.’ No. 2, ‘Newt’s Ideas.’ No. 3, No. 4, ‘Newt’s Ideas.’ The little one is ‘Newt’s Good Ideas.’

And I did say better AND worse, as some regulators warned about in 1944.

…inventions created by modern science can be used either to subjugate or liberate. The choice is up to us. […] It was Hitler’s claim that he eliminated all unemployment in Germany. Neither is there unemployment in a prison camp.

But perhaps most fundamentally, Moss overlooks the very nature of regulation: an instrument that historically has projected forward, preemptively calibrating guidelines to steer innovation, getting ahead of things by navigating toward responsible or moral avenues for measuring progress.

Moss’s viewpoint can be likened to a piece of a broader history of “BlackHat” disinformation tactics: ironically calling for proactive strides in outlining rules for the burgeoning realm of artificial intelligence because regulators are supposedly behind the times, while simultaneously decrying regulators who do too much too soon by not leaving unaccountable play space for the most creative exploits (e.g. hacking).

Say anything, charge for admission.

A more comprehensive and stable view emerges against the inconsistencies in their lobbying, by scrutinizing the purpose of regulatory frameworks. They are, by design, instruments of foresight, an architecture meant to de-risk uncharted waters of the future. Analogous to traffic laws, such as the highly controversial seat belt mandates that dramatically reduced future automobile-related fatalities, regulations are known for generating the playing field for genuine innovations by anticipating potential dangers and trying to stop harms. Airbags, also highly controversial, were an innovation driven by regulation, developed after the risk reduction of seat belts petered out. Ahead or behind?

In essence, regulatory structures are nothing if not future-focused. They are about drawing blueprints, modeling threats, calculating potential scenarios and ensuring responsible engineering for building within those parameters. Regulatory frameworks, contrary to being bound just to strict historical precedent, are vehicles of anticipation, fortified to embrace the future’s challenges with proactive insight.

While Moss’s sensational main stage performance pretends it can allude to past practices, it unfortunately spreads false and deceptive ideas about how regulators work and why. The giant tapestry of regulation around the world is woven with threads of anticipatory vigilance, made by people who spend their careers working on how to establish a secure trajectory forward.

Speaking of getting ahead of things it’s been decades already since regulators openly explained that BlackHat has racist connotations, and three years since major changes in the industry by leaders such as the NCSC.

…there’s an issue with the terminology. It only makes sense if you equate white with ‘good, permitted, safe’ and black with ‘bad, dangerous, forbidden’. There are some obvious problems with this…

Obvious problems, and yet the poorly-named BlackHat apparently hasn’t tried to get ahead of these problems. If they can’t self-regulate, perhaps they’re begging for someone with more foresight to step in and regulate them.


Now, don’t get me started on the history of cowboy hats and why disinformation about them in some of the most racist movies ever made (e.g. “between the 1920s and the 1940s”) does not in any way justify a tolerance for racism.

In this golden age of Westerns, good and evil weren’t color coded: there was plenty of room for moral ambiguity.

I say don’t get me started because as soon as someone says “but Hollywood said good guy hats are white” I have to point them at the absolutely horrible propaganda that was actually created by Hollywood to form that association of a “White Hat” being “good.”

…Hollywood took charge. In 1915, director D. W. Griffith adapted The Clansman as The Birth of a Nation, one of the very first feature-length films and the first to screen in the White House [and used by President Woodrow Wilson to restart the KKK]. Its most famous scene, the ride of the Klan, required 25,000 yards of white muslin to realize the Keller/Dixon costume ideas. Among the variety of Klansman costumes in the film, there appeared a new one: the one-piece, full-face-masking, pointed white hood with eyeholes, which would come to represent the modern Klan.

That’s how “room for moral ambiguity” in Westerns was codified by some wanting white to be “good”. BlackHat thus framing itself as a “bad” guy conference peddles racism straight out of 1915 Hollywood, and never in any good way. They really need to get ahead of things.

I mean practically any historian should be able to tell BlackHat that while in America the white hats typically were regarded as bad guys (very unlike the propaganda of Hollywood, promoting the KKK), an inverse to the evil lawlessness of American white hate groups was BLUE.

…lawless in its inception, [white hat wearing group] was soon dominated by the lawless element against which it was formed. The good citizens, instead of restoring law and order, became the servants and tools of disorder and mob violence. After two years of white-capism, another organization was formed to “down” the “White-caps,” called “Blue Bills.”

The Blue Bills, started by a doctor, in fact were all about law and order, rejecting the “anonymous hacker” methods of the white hats.

The Blue Bills did not consider themselves above the law; they didn’t consider themselves vigilantes, or heroes. They simply wanted to bring a stop to the terror that the White Caps had brought about…

But I digress… the BlackHat conference name is and always has been on the wrong side of history.