Category Archives: Food

ChatGPT is a Fraud, Court Quickly Finds, and Sanctions Lawyer

For months now I have been showing lawyers how ChatGPT lies, and some beg, even plead, for me to write about it publicly.

“How do people not know more about this problem?” they ask me. Indeed, how is ChatGPT’s failure not front-page news, given it is the Hindenburg of machine learning?

And then I ask myself how lawyers do not inherently distrust ChatGPT and see it as explosive garbage that can ruin their work, given the law has a legendary distrust of humans and a reputation for caring about tiny details.

And then I ask myself why I am the one who has to report publicly on ChatGPT’s massive integrity breaches. How could ChatGPT be built without meaningful safety protections? (Don’t answer that; it has to do with greedy fire-ready-aim models curated by a privileged few at Stanford, in a rush to profit by stepping on everybody to summit an artificial hill created for an evil new Pharaoh of technology centralization.)

All kinds of privacy breaches these days result in journalists banging away on keyboards. Everyone writes about them all the time (two decades after regulation forced their hand, with breach disclosure laws starting in 2003).

However, huge integrity breaches seem comparatively ignored, even when the harms may be greater.

In fact, when I blogged about the catastrophic ChatGPT outage, practically every reporter I spoke with said “I don’t get it”.

Get what?

Are integrity breaches today somehow not as muckrake-worthy as back in The Jungle days?

The lack of journalist attention to integrity breaches has resulted in an absurd amount of traffic coming to my blog, instead of people reading far better-written stuff at the NYT (public safety paywall) or Wired.

I don’t want or need the traffic/attention here, yet I also don’t want people to be so ignorant of the immediate dangers that they never see them before it’s too late. See something, say…

And so here we are again, dear reader.

A lawyer has become a sad casualty of the fraud known as OpenAI’s ChatGPT. An unwitting, unintelligent lawyer lazily and stupidly trusted this product, a huge bullshit generator full of bald-faced lies, to do their work.

The lawyer asked the machine to research and cite court cases, and of course the junk engineering… basically lied.

The court, as you might guess, was very displeased to find itself reviewing lies. Note its conclusion to “never use again” the fraud that is ChatGPT.

Harsh but true. Allegedly the lawyer decided ChatGPT was to be trusted because he asked ChatGPT itself whether it could be trusted. Hey witness, should I believe you? OK then.

Apparently the court is now sanctioning the laziest lawyer alive, if not worse.

A month ago, when I was presenting findings like this, a professor asked me how to detect ChatGPT. To me this is like asking a food critic how they detect McDonald’s. I answered “how do you detect low quality?” because isn’t that the real point? Teachers should focus on quality of output, and thus warn students that if they generate garbage (e.g. use ChatGPT) they will fail.
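Detecting low quality doesn’t have to be hand-waving, either. A fabricated citation either exists in the public record or it doesn’t. Below is a minimal sketch of the kind of existence check any lawyer could have run before filing, assuming CourtListener’s public search API; the endpoint shape, parameters and response fields are my assumptions for illustration, and the citation itself is hypothetical.

```python
# Minimal sketch: treat every machine-generated citation as unverified
# until it matches a real, published opinion in the public record.
# ASSUMPTION: CourtListener's search endpoint and "count" field work as shown.
import requests

def citation_exists(citation: str) -> bool:
    """Return True only if a published opinion matches the citation text."""
    resp = requests.get(
        "https://www.courtlistener.com/api/rest/v4/search/",
        params={"q": f'"{citation}"', "type": "o"},  # "o" = opinions
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

# Hypothetical citation for illustration; anything not found gets flagged.
cite = "Smith v. Jones, 123 F.4th 456"
print(cite, "->", "found" if citation_exists(cite) else "NOT FOUND, do not file")
```

A check this naive would have flagged every fake case before it hit a docket, which is exactly the point: quality control is cheap, and skipping it is a choice.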

The idea that ChatGPT has some kind of quality to it is the absolute fraud here, because it’s basically operating like a fascist dream machine (pronounced “monopolist” in America): target a market to “flood with shit” and destroy trust, while demanding that someone else fix it (never themselves, until they have eliminated everyone else).

Look, I know millions of people will willingly eat something called a McRib and say they find it satisfying, or even a marvel of modern technology.

I know, I know.

But please let us for a minute be honest.

A McRib is disgusting, barely edible garbage, with long-term health risks.

Luckily, just one sandwich probably won’t have many permanent effects. If you step on the scale the next day and see a big increase, it’s probably mostly water. The discomfort will likely cease after about 24 hours.

Discomfort. That is what nutrition experts say about eating just one McRib.

If you have never experienced a well-made beef rib with proper BBQ, that does not mean McDonald’s has achieved something amazing by fooling you into paying them for a harmful lie that causes discomfort before permanent harmful effects.

…nausea, vomiting, ringing in the ears, delirium, a sense of suffocation, and collapse.

This lawyer is lucky to be sanctioned early instead of disemboweled later.

Sorry, meant disbarred. Autocorrect. See the problem yet?

Diabetes is a terrible thing to facilitate, as we know from people who guzzled McDonald’s instead of real food and realized too late that their lives (and healthcare system) were ruined.

The courts must think big here and quickly stop any and all use of ChatGPT, with a standard of integrity straight out of basic history: stop those who avoid accountability, who think gross, intentional, harmful lies made for profit by machines (e.g. OpenAI’s) should be prevented or cleaned up by anyone other than themselves.

The FDA, created because of the reporting popularized by The Jungle, didn’t work as well as it should have. But that doesn’t mean the FDA can’t be fixed to reduce cancer in kids, or that another administration can’t be created to block the sad and easily predictable explosion in AI integrity breaches.

FDA Loophole for American Candy Gives Cancer to Kids

An NYT report highlights something I’ve been seeing a lot lately in American generative AI logic.

…many chemicals are approved under a provision known as Generally Recognized As Safe, which states that a food additive can forego review by the F.D.A. if it has been deemed safe by “qualified experts.”

“Qualified experts” is an obviously shady phrase that can enable private companies to self-regulate, a political process meant to directly corrupt safety for profit.

If a doctor at Stanford will go on the record saying smoking is good for you, in exchange for lavish gifts, American tobacco companies will absolutely use that to deny science. True story.

So too with American candy companies, which seem to use giant safety regulation loopholes to act like cancer isn’t the predictable outcome of known carcinogens they serve children.

One point of contention is that the vast majority of the research on these additives has been done in animals because it is difficult (and unethical) to conduct toxicology research in humans. As a result, “It’s impossible to say that eliminating Red 3 or titanium dioxide from the American diet will reduce the number of people who suffer from cancer by a certain amount with total precision,” Mr. Faber said. “But anything that we can do to reduce our exposure to carcinogens, whether known or suspected carcinogens, is a step in the right direction.”

This is probably a good time to remember that the FDA was created as a reaction to labor abuse complaints in Chicago, as captured in The Jungle. Instead of directly improving rights for workers, the government sought to improve perception of the food quality from places with exposed inhumane working conditions.

At some point these discussions should start to push forward a realization that America often seems to embrace obvious graft and oppose quality, even in cases of children getting cancer.

“One rotten apple spoils the bunch” is a saying that seems to entirely escape the anti-regulation zealots tying the hands of the FDA. And this behavior is having a profound impact on generative AI learning, which parrots inane ideas like “science is evasive and pluralist” because (hypothetically speaking) some candy oligarch sent her kid to medical school to keep them on the family payroll as a dissenting “qualified expert”.

Related: Lege packt aus: Miese Maschen im Snack-Regal (Lege spills the beans: dirty tricks on the snack shelf)

How Fixing Howitzers in Ukraine is Like Baking a Cake

“From America with love” is written on a Ukrainian M777 “three axes” howitzer to be fired at Russians.

When I wrote my first book in 2012, I pitched the publisher on cooking recipes for cloud security.

My vision was that one page would describe how to make an historic meal (such as Royal Navy spotted dick) and then the rest of the chapter would give cloud technical steps (such as how to set up secure remote administration).

I even presented a test chapter for the RSA Conference in China on how to grill the perfect hamburger, as a recipe for cloud encryption and key management.

Things didn’t turn out quite like I had expected, as the publisher asked to change the title to virtualization, drop the food recipes, and insert a DVD. It felt like preparing a gourmet vegan dessert and being told to stick to the meat and potatoes.

*Sigh*

Nonetheless in my mind cooking remains a powerful way to convey the relationship between technology and knowledge.

Everybody eats.

Food automation tends to be disgusting, even causing illness, whereas technology augmentation of human cooking, using recipes for quality control and governance, will produce the best possible meal.

Perhaps the canonical example I hear all the time in AI ethics circles… if you brought a robot into your home and told it to prepare you a steak dinner, should you be surprised if later you can’t find the dog?

Hey, I didn’t say the robot was Chinese. Stop thinking so simply.

Microsoft management clearly didn’t understand such basic anthropological tenets of technology use. The big news, hopefully surprising nobody, is that illness has forced them to cancel a massively funded VR program.

The personnel demoing the tech appear to be using a variant of Microsoft HoloLens. The government recently halted plans to buy more “AR combat goggles” from Microsoft, instead approving $40 million for the company to develop a new version. The reversal came after discovering that the current version caused issues like headaches, eyestrain and nausea.

Such a waste of time and money to find out what is easily predicted.

Soldiers “cited IVAS 1.0’s poor low-light performance, display quality, cumbersomeness, poor reliability, inability to distinguish friend from foe, difficulty shooting, physical impairments and limited peripheral vision as reasons for their dissatisfaction,” per the DOT&E assessment. The Army knows that IVAS 1.0 is something of a lemon [yet] still plans on fielding the 5,000 IVAS 1.0 units it’s currently procuring from Microsoft at $46,000 a pop to training units and Army Recruiting command for a total price tag of $230 million.

It’s like reading some people got sick and then discovered their taco MRE bag wasn’t really a taco, just sugar and cornmeal drenched in preservatives and artificial taco flavors.

VR from Microsoft sounds like the hardtack (dry “cracker”) of combat goggles. A real bargain at $230 million.

See-through augmentation measured on efficiency and minimal interference is a whole different story, as it avoids all the foundational problems of automation (e.g. where flavor, or actually useful nutrition, comes from).

Google Glass really blew it on this point. They could have developed a HUD for highly technical work like repairing machines with both hands.

Of course Google didn’t think like this because their engineers all went straight from elite schools to sitting in a gourmet cafeteria eating free lunches and talking mostly about their exotic vacations.

They’re in a virtual world, the opposite of what’s required for knowledge, let alone innovation. And that’s why their products depend on finding people who really live, who have daily struggles and needs in the real world, to tell them what to engineer.

That’s all background to the main point here that howitzers in Ukraine are proving today what everyone should have been working on for at least the last decade: cooking.

DARPA’s training demos use something more pedestrian: cooking. Dr. Bruce Draper, the program’s manager, describes it as the ideal proxy task. “[Cooking is] a good example of a complex physical task that can be done in many ways. There are lots of different objects, solids, liquids, things change state, so it’s visually quite complex. There is specialized terminology, there are specialized devices, and there’s a lot of different ways it can be accomplished. So it’s a really good practice domain.” The team views PTG as eventually finding uses in medical training, evaluating the competency of medics and other healthcare services.

First you bake a cake together as a team using augmented vision… then you destroy invading armies with it.

Using phones and tablets to communicate in encrypted chatrooms, a rapidly growing group of U.S. and allied troops and contractors is providing real-time maintenance advice — usually speaking through interpreters — to Ukrainian troops on the battlefield. In a quick response, the U.S. team member told the Ukrainian to remove the gun’s breech at the rear of the howitzer and manually prime the firing pin so the gun could fire. He did it and it worked.

Delicious.

I’m not going to claim credit for this obvious future of technology based on ancient wisdom, given there are so many children’s tales saying the same thing.

Ratatouille is probably my favorite, easily digested in movie format.

The real kicker to the howitzer example is the technical teams spell out very precisely in life and death context where augmentation works best and where it fails (hint: Blockchain is a disaster).

As the U.S. and other allies send more and increasingly complex and high-tech weapons to Ukraine, demands are spiking. And since no U.S. or other NATO nations will send troops into the country to provide hands-on assistance — due to worries about being drawn into a direct conflict with Russia — they’ve turned to virtual chatrooms.

I use virtual chatrooms so much I forgot for a minute that they’re virtual.

The Ukrainian troops are often reluctant to send the weapons back out of the country for repairs. They’d rather do it themselves, and in nearly all cases — U.S. officials estimated 99% of the time — the Ukrainians do the repair and continue on. …Ukrainians can now put the split weapon back together. “They couldn’t do titanium welding before, they can do it now,” said the U.S. soldier, adding that “something that was two days ago blown up is now back in play.”

I love this SO MUCH. Right to Repair in a nutshell. Technology dramatically enhances developing markets when it comes with shared knowledge, like how to restore that technology in the field.

It’s the awesome Dakar “Malle Moto” model of efficiency and sustainability that all technology should be put through, instead of lionizing the most wasteful big-budget teams.

And now for the main point:

Sometimes video chats aren’t possible. “A lot of times if they’re on the front line, they won’t do a video because sometimes (cell service) is a little spotty,” said a U.S. maintainer. “They’ll take pictures and send it to us through the chats and we sit there and diagnose it.”

Visual diagnosis in real time to bake a highly complicated cake. Including translation for chefs representing 17 nations in a small kitchen.

As they look to the future, they are planning to get some commercial, off-the-shelf translation goggles. That way, when they talk to each other they can skip the interpreters and just see the translation as they speak, making conversations easier and faster.

And I warned you about blockchain.

The expanse of weapons and equipment they’re handling and questions they’re fielding were even too complicated for a digital spreadsheet — forcing the team to go low-tech. One wall in their maintenance office is lined with an array of old-fashioned, color-coded Post-it notes, to help them track the weapons and maintenance needs.

Hope that’s clear. Writing a big blog post about how to share knowledge in the future is hard. Not as hard as a book, obviously, but I definitely could use some augmentation right now.

More than anything, it’s clear to me that without government-funded research teams, many tech companies would be utterly and completely lost in expensive dead-end navel gazing.

DARPA is asking for the development of recipes that were really needed a decade ago, based on an assessment of the hunger it sees right now. While it’s fashionable to call this future thinking to avoid blame, in reality it’s being less ignorant about present troubles.

Let the Russians desperate for a Chinese MRE eat cake instead, a delicious one right out of the howitzer.

Or, I believe, Molotov in WWII would have called them “bread baskets”.

Vyacheslav Molotov claimed in 1939 the Soviet Union was not dropping bombs on Finland, just airlifting food. The Finns thereafter called RRAB-3 cluster bombs “Molotov’s bread basket” (Molotovin leipäkori) and named their improvised incendiary device (used to counter Soviet tanks) a Molotov cocktail — “a drink to go with the food.”

Tesla FSD Caused Crash of 8 Cars on Interstate

Yet again there is evidence of Tesla expanding its critical safety failures, by design.

If you read the already shocking number of complaints to the NHTSA from new Tesla owners, you will find hundreds citing a terrifying, sudden, unexplained braking event.
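You don’t have to take my word for it, either; the complaint database is public. Here is a minimal sketch of pulling the braking reports yourself, assuming NHTSA’s complaints API at api.nhtsa.gov (the exact parameter and field names are my assumptions, so verify them against the agency’s documentation).

```python
# Minimal sketch of counting "phantom braking" complaints for a Tesla model.
# ASSUMPTION: NHTSA's complaintsByVehicle endpoint and fields work as shown.
import requests

NHTSA_URL = "https://api.nhtsa.gov/complaints/complaintsByVehicle"

def braking_complaints(model: str, year: int) -> list[str]:
    """Fetch owner complaints, keep those describing braking events."""
    resp = requests.get(
        NHTSA_URL,
        params={"make": "TESLA", "model": model, "modelYear": year},
        timeout=30,
    )
    resp.raise_for_status()
    complaints = resp.json().get("results", [])
    # Crude keyword filter: matches "brake", "braked", "braking" summaries.
    return [c.get("summary", "") for c in complaints
            if "brak" in c.get("summary", "").lower()]

hits = braking_complaints("MODEL 3", 2021)
print(f"{len(hits)} braking-related complaints on file")
```

Run that across a few model years and the pattern of reports speaks for itself.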

Here’s typical language, reported for years, as if causing crashes has just been Tesla’s way of learning which crimes they can get away with.

Twice today my model 3 came to a hault when using cruise control on the highway. The second time everything in my car was thrown into the front seat/windshield as i was going 80mph and I took over but was at 30mph by then as it happened so fast .. WTH is going on as I could have been killed and/or killed others.

Note the last sentence, because Tesla’s official response shows they aren’t listening.

In fact, “ghost brakes” have plagued Tesla for a long time. The NHTSA survey [based on reports of Tesla crashes and injuries] covers about 416,000 vehicles produced in 2021 and 2022. Tesla said there have been no reports of crashes or injuries resulting from the issue.

You might think what Tesla said in response sounds unbelievable. And you’d be right.

“No reports” is used as an intentional logical fallacy known as “no true Scotsman”. Even when you crash, they might say it wasn’t a really big crash. And if you have a big crash, they might say the plaintiffs weren’t really harmed. And if someone dies, they might say not that many people were really harmed.

How can this “plague” of life-threatening engineering failures, with potential for catastrophic widespread crashes, be ignored by Tesla for so long?!

Sadly the answer is simple, aside from the logical fallacy tactics.

The Tesla CEO is a science denier.

On March 19, 2020 the Tesla CEO used his Twitter account to announce America was headed toward “zero new cases” of COVID-19 by the end of April. At the end of April, case counts instead spiked to upwards of 20,000 a day, proving him dangerously wrong. But did he accept science? No, he dug himself deeper into fantasy beliefs and mysticism.

The CEO used his bully pulpit to convince people to ignore warnings about COVID-19 and keep going to work, argued against vaccines and launched baseless attacks on public servants to diminish their ability to provide safety during the pandemic.

He pushed hard for disinformation to be allowed, denying harms while facilitating unnecessary suffering and death.

What a recently exposed report shows is that every Tesla on the road is indeed the result of intentional safety denial, and thus a threat to anyone around it.

A driver told authorities that their Tesla’s “full-self-driving” software braked unexpectedly and triggered an eight-car pileup in the San Francisco Bay Area last month that led to nine people being treated for minor injuries including one juvenile who was hospitalized, according to a California Highway Patrol traffic crash report. […] Tesla Model S was traveling at about 55 mph and shifted into the far left-hand lane, but then braked abruptly, slowing the car to about 20 mph. That led to a chain-reaction crash that ultimately involved eight vehicles, all of which had been traveling at typical highway speeds.

It takes a special kind of criminal to repeatedly raise prices for a product falsely marketed as a road safety feature, when year after year it makes everyone far less safe.

A video posted recently by an FSD user demonstrates the software providing an embarrassingly less safe, more stressful ride.

Man, my heart rate is definitely higher during this drive than the average normal drive…

What should come to mind here is that Tesla FSD has always been a “fraud” or “snake oil”, and that public roads should have been protected from it.

As the Center for Auto Safety puts it:

…what’s the threshold number of injuries and deaths and cars driving stupidly that we have to see before NHTSA finds that there’s some sort of defect in these cars?

Calling the bug riddled Tesla FSD a safety feature is like calling meal worm tacos a cure for COVID-19.

Given how bad Tesla engineering quality has been, if it were food… it would be mostly bugs.

Perhaps the regulators will soon come to the realization that Tesla has always treated its customers like crash test dummies and its investors like an ATM.