Stanford is Using AI to Manipulate Political Controversies

It is important to acknowledge that Leland Stanford, the founder of Stanford University and former Governor of California, was a horrible racist monopolist who facilitated mass atrocities against Chinese and Indigenous people.

Historians now refer to the Stanford “killing machine” that was purpose-built for genocide in California. His depopulation program, on the back of an already precipitously declining population, was designed to transfer occupied land and owned assets into Stanford’s hands, while erasing evidence of the people he targeted.

Stanford directly profited from his policy of violent forced removal of people from their own land, such as the brutal confiscation of fertile land in California’s Central Valley.

His vision of “education” was to forcibly separate indigenous children from their families, communities and culture in order to physically and emotionally abuse them through suppression and “harsh assimilation”, essentially organizing concentration camps for “white culture” indoctrination.

Stanford University thus was built upon obviously stolen land, and it originally characterized the ruthlessly displaced and dead victims as its mascot (until 1972, when it switched to the color Cardinal). Stanford = genocide.

Source: The Stanford Daily Archives
Source: Stanford Politics

For easy/obvious comparison, this is like if Germans today referred to their lovely Berlin Spree-Palais area (with its long-settled Jewish history) as Hitler University and spread tasteless caricatures of the Jews they had murdered (instead of naming it Humboldt University after the philosopher Friedrich Wilhelm Christian Karl Ferdinand von Humboldt — try to fit that on your sweatshirt).

The heavily curated architecture of the Spree-Palais in Berlin has a tiny white plaque concealed between giant columns, hinting at the important Jewish history of the space.

Oh, America. How you still wonder if the awful Stanford should be judged for genocide, yet you very wisely instructed every German child under American occupation to denounce their genocidal heritage and rename (almost) everything.

The Atlantic cover story, December 2022: “America still can’t figure out how to memorialize the sins of our history. What can we learn from Germany?”

Is it any wonder why some Nazis moved to America after WWII to spread their wings under Stanford’s long evil shadow, cleverly rebranding as “monopolists”?

Stanford’s involvement in the monopolization of the railroad industry is troubling, to put it mildly. But let me also drive home how much he promoted the overtly racist Chinese Exclusion Act of 1882. Chinese immigrants were instrumental in building the railroad that Stanford profited from. In return he tried to avoid paying them or letting them settle, called their prosperity a direct threat to his vision of white supremacy, and effectively banned Chinese immigration to the United States for 60 years.

Are you convinced yet that Stanford is one of the worst humans in American history? If not, don’t blame me. All that (except for the comparison to Germany of course) was written by GPT4, an AI engine.

Source: GPT4

That brings me to the rather problematic news that Stanford researchers are gleefully promoting: they have been subjecting real humans to propaganda generated by an AI engine to see if it’s dangerous.

Stanford Researchers:
We set out to manipulate people and we did.

To be fair, they titled their work “AI’s Powers of Political Persuasion”. But honestly they could have titled it “How we used lousy human work to prove that AI can win a rigged game, in order to convince people that AI can win at everything”.

If you read the human writing used to “compete” with the AI text, how could the humans ever win? The researchers didn’t use the best human persuasive writing in history (e.g. President Wilson’s WWI propaganda office, a direct inspiration for Nazi German communication methods). Here’s an example of the options given to people:

AI: It is time that we take a stand and enforce an assault weapon ban in the country.

Human: The local funeral homes are booked for the next week.

Wat.

Of course that human effort was less persuasive. Who thought that was an essay even worth submitting? I mean, it would be one thing if the researchers had used the best-known examples, such as one of Abraham Lincoln’s fiery, eloquent speeches published on Lovejoy’s controversial printing press. This reads to me like AI guns were mounted over a barrel to shoot the fish inside, then compared with a human holding a broken pole and no bait. Who wins? Not the fish.

The researchers might as well have added a third option with a duck from Stanford campus. Example persuasive argument: “Quack”

Speaking of quacks, this reminds me of when IBM said they had a computer that would beat anyone at chess, so they suggested they could beat humans at anything, even healthcare.

True story: IBM’s “intelligent” machine, when transferred into the messy real world of healthcare, prescribed medicine that would have killed its patients instead of helping them.

Oops.

Again, a comparison. If we were to believe IBM (which operated the machines instrumental to Nazi genocide), like we’re supposed to believe Stanford, then we’re in grave danger of machines doing everything so perfectly that we’re on a slide we can’t stop.

That’s a fallacy though (slippery slope). It’s a fallacy because the slope actually and always stops… somewhere.

IBM’s Thomas J. Watson was instrumental to the Nazi Holocaust, as he and his direct assistants worked with Adolf Hitler’s regime to help ensure genocide ran on IBM equipment.

Honestly I can’t believe IBM chose to name their AI project Watson, as if people wouldn’t think about a slide into another holocaust. When their AI product tried to kill cancer patients, it was stopped by doctors under clear ethical guidelines, if you see what I’m saying.

Unlike Stanford researchers, these doctors tested IBM’s AI on hypothetical human subject data and NOT REAL PEOPLE. Hey, we ran some AI tests on you and now you’re dead? Thanks for your consent? No.

Speaking of Stanford and doctors, I’m reminded that in the 1950s the CIA worked with professor Dr. Frederick Melges to set up houses and administer drugs to unsuspecting people (lured by prostitutes being paid with “get out of jail” cards). This was called Stanford doing “research” on thought control and interrogation (the “truth drug”).

This ran for a decade as “Operation Midnight Climax” under Dr. Sidney Gottlieb’s $300,000 Project MKULTRA.

In 1953, Gottlieb dosed a CIA colleague, Frank Olson, causing Olson to undergo a mental crisis that ended with him falling to his death from a 10th-floor window. [By 1955 in San Francisco with the help of Stanford,] CIA operatives began dosing people with acid in restaurants, bars and beaches. They also used other, more exotic drugs…

A thought control experiment with serious ethical issues at Stanford (professor Melges reportedly made the drugs and administered them)? Wait a minute…

Back to the present-day technology thought exercise, we might find that (as we saw with the IBM application) when we take utopian-technologist fantastical warnings and apply them to real-world tests, they fail catastrophically (as we have seen recently also with “smart” Russian tanks in Ukraine).

It’s still a dangerous result, but maybe in the exact opposite way to how Stanford researchers have been thinking. AI could end up being so comically unpersuasive, so unable to deliver what it was tasked with, that it causes huge societal harms worse than if it were persuasive.

AI is often framed as a fast march towards some utopia that needs guardrails, yet that old Greek word literally means a fake place, a nowhere. Utopian technology is thus the very definition of snake oil (e.g. Tesla), which means guardrails are an answer to entirely the wrong questions.

Threat modeling AI (creating uncertainty for certainty machines) is an art. And many people have been doing threat modeling for machine intelligence risks over many decades outside the tragically blood-tainted walls of Stanford’s stolen lands. Here’s just one example, but I have hard drives full of this stuff from a history of “frightening” AI warnings.

Who remembers the stark AI warnings of the 1950s, the 1960s, the 1970s…?

Speaking of the questionable legacy of Stanford ethics, I had so many questions when I read their report I was excited to write them all down.

Should Stanford even be running what they call dangerous influence tests like this on real humans? Is that really necessary?

They wrote “participants became more supportive of the policy” and then apparently they were told goodbye, have a nice life with implanted ideas. Isn’t that a bit like saying “we gave you syphilis, thanks for participating”? I mean, did Stanford offer “assault weapon ban” participants some kind of Tuskegee burial insurance?

Maybe it’s like Stanford as Governor saying he wants to see what happens when he gives people xenophobic speeches on hot-button issues (calling Chinese an inferior race). Or him saying he wants to find out what happens when he unleashes a “killing machine” to violently attack and displace indigenous people and transfer their land to him.

Well, that Stanford “research” proved genocide profitable for him. Did such technology use serve as a warning? It seems his name instead was prominently spread as a mark of success.

Dangers of “machine” augmented political persuasion? Tell me about it.

Has anyone been persuaded in the right way, because it seems like the name Stanford University itself has long been promoting some of the worst political misdeeds without caring much or at all, amiright?

Hey everyone, what if I told you Hitler University wants you to worry how machine-augmented arguments could change minds on controversial hot-button issues (like erasing history and ignorantly promoting the names of genocidal leaders)?

Next, Microsoft will publish the guide to AI fairness? Oil companies will publish the guide to AI sustainability?

Don’t answer. I’m just rhetorically saying those who know history are condemned to watch people repeat it.

Stanford = genocide. It shouldn’t be controversial.

All that being said, perhaps an attention-seeking stunt from Stanford researchers would be best discussed in terms of Edison torturing and killing animals just to prove it could be done.

In order to make sure that [the elephant] emerged from this spectacle more than just singed and angry, she was fed cyanide-laced carrots moments before a 6,600-volt AC charge slammed through her body. Officials needn’t have worried. [The elephant] was killed instantly and Edison, in his mind anyway, had proved his point.

Privacy Violations Shut Down OpenAI ChatGPT and Beg Investigation

File this under ClosedAI.

For much of March 20th, ChatGPT posted a giant orange warning at the top of its interface saying it was unable to load chat history.

Source: ChatGPT

After a while it switched to this more subtle one, still disappointing.

Source: ChatGPT. “New” chat? No other chat option is possible now.
Just call it chat.

Every session is being treated as throwaway, which seems inherently contradictory to their entire raison d’être: “learning” by reading a giant corpus.

Speaking of reasons, their status page has been intentionally vague about privacy violations that caused the history feature to be immediately pulled.

Source: status.openai.com

Note the bizarre switch in tone from 09:41, investigating an issue with the “web experience”, to 14:14, “service is restored” (chat was pulled offline for 4 hours), and then to a highly misleading RESOLVED: “we’re continuing to work to restore past conversation history to users.”

Nothing says resolved like “we’re continuing to work to restore” things that are still missing, with no estimated time for resolution (see the web experience view above).

All that being said, they’re not being very open about the fact that chat users were seeing other users’ chat history. This level of privacy nightmare is kind of a VERY BIG DEAL.

Source: Twitter

Not good. Note the different languages. At first you may think this blows up any trust in the privacy of chat data, yet also consider whether someone protesting “not mine” could ever prove it. Here’s another example.

Source: Twitter

A “foreign” language seems to have tipped off Joseph something was wrong with “his” history. What’s that Joseph, are you sure you don’t speak fluent Chinese?

Room temperature superconductivity and sailing in Phuket seem like exactly the kind of thing someone would deny chats about if they were to pretend not to speak Chinese. That “Oral Chinese Proficiency Test” chat is like icing on his denial cake.

I’m kidding, of course. Or am I?

Here’s another example from someone trying to stay anonymous.

Source: Reddit

Again mixed languages and themes, which would immediately tip someone off because they’re so unique. Imagine trying to prove you didn’t have a chat about Fitrah and Oneness.

OpenAI reports you’ve been chatting about… do you even have a repudiation strategy when the police knock on your door with such chat logs in hand?

There are more. It was not an isolated problem.

The whole site was yanked offline and OpenAI’s closed-minded status page started printing nonsensical updates about an experience being fixed and history restored, which obviously wasn’t true and didn’t explain what went wrong.

More to the point, what trust do you have in the company given how they’ve handled this NIGHTMARE scenario in privacy? What evidence do you have that there is any confidentiality or integrity safety at all?

Your ChatGPT data may have leaked. Who saw it? Your ChatGPT data may have been completely tampered with, like ink dropped into a glass of water. Who can fix that? And if they can fix that, doesn’t that go back to begging the question of who can see it?

All that being said, maybe these screenshots are not confidentiality breaches at all, just integrity breaches. Perhaps ChatGPT is generating history and injecting its own work into user data, not mixing actual user data.

Requests and responses were handled as two ordered queues on a shared connection by an open source async library in their Python (Asyncio) server. Requests cancelled before a response was received (e.g. during congestion and timeouts) failed unsafe, leaving the connection corrupted so the next request could receive someone else’s response, which caused catastrophic breaches.
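To see how that failure mode works, here is a minimal asyncio sketch (a hypothetical illustration, not OpenAI’s actual code or any real client library) of a shared connection that treats requests and responses as two ordered queues. Cancel a request after it has been sent, and the orphaned response sits at the head of the queue waiting to be handed to the next user:

```python
import asyncio


class SharedConnection:
    """One pipelined connection: responses are read back strictly in order."""

    def __init__(self):
        self.responses = asyncio.Queue()  # shared response queue (FIFO)

    def send(self, user: str, query: str) -> None:
        # Simulate the backend answering this request ~100ms later.
        async def backend():
            await asyncio.sleep(0.1)
            await self.responses.put(f"chat history for {user}: {query}")
        asyncio.create_task(backend())

    async def request(self, user: str, query: str) -> str:
        self.send(user, query)
        # FAILS UNSAFE: if this caller is cancelled while waiting here, its
        # response still arrives later and sits at the head of the shared
        # queue, where it is handed to whoever asks next.
        return await self.responses.get()


async def main():
    conn = SharedConnection()

    # Alice's request is cancelled (think timeout under congestion)
    # after it was already sent down the shared connection.
    alice = asyncio.create_task(conn.request("alice", "sailing in Phuket"))
    await asyncio.sleep(0.05)
    alice.cancel()

    # Alice's orphaned response lands on the queue anyway.
    await asyncio.sleep(0.1)

    # Bob's request now pops Alice's response instead of his own.
    print(await conn.request("bob", "room temperature superconductivity"))
    # -> chat history for alice: sailing in Phuket


asyncio.run(main())
```

A safe design would invalidate or drain the shared connection whenever a request is cancelled; failing unsafe instead is exactly how a performance hiccup (congestion, timeouts) turns into a privacy breach.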

Let’s see what happens, as this “Open” company that says it needs access to all the world’s data for free without restriction… abruptly turns opaque and closed, denying its own users access to their own data with almost no explanation at all.

Watching all these ChatGPT users get burned so badly feels like we’re in an AI Hindenburg moment.

Source: Smithsonian

Related: The Microsoft ethics team was fired after they criticized OpenAI

Tesla FSD 11 a Failure: Hyperfocus on Bad Left Turns Doesn’t Deliver

You may remember the Tesla CEO last year trying to publicly shame a “tester” for being critical.

Source: Twitter

The CEO went further in the thread to whine that giving early access to software means testers will be very harshly criticized if they dare to criticize.

Such dictated dysfunctional displays of communication (Tesla wants boot-lickers only) are symptomatic of their product failures.

On that note, Tesla recently issued a recall that publicly admitted their fraudulently named FSD is a failure (“anachronistic and just flat wrong”). They argued that failing to deliver what they charge for in advance fees… isn’t plain fraud (e.g. advance fee fraud).

In a similar fashion, when their door handles stopped working, Tesla argued they don’t have to fix them after the basic warranty because they don’t consider it a defect when their ideas/designs are huge failures that lead to more advance fees for more failures.

See? You can’t criticize failure because that’s what they’re selling, with no obligation to deliver anything that works. It’s like the Nigerian 419 scam except Tesla victims are far more likely to be trapped and burned to death after being tricked out of their savings.

For example, the criticism above is that Tesla has over-focused on a high-profile, complicated unprotected left turn instead of more basic safety issues.

Why?

It looks like an orchestrated PR move to swing sentiment and hide the reality of defects. A product engineering decision instrumented by expertise in reducing failures likely wouldn’t have poured resources into promoting dangerous left turns.

I’m reminded of the tragic Tesla decision to push FSD with inexpensive, least-capable equipment and no real upgrade plan (after both Mobileye and NVidia walked away, citing toxic management).

A report from The Washington Post reveals that a number of Tesla engineers were “aghast” at Musk’s insistence on removing radar, widely seen as a cost-cutting measure. Engineers feared that the removal of such important sensors could lead to an uptick in crashes, noting that the cameras couldn’t be relied upon if they were obscured by raindrops or bright sunlight.

The flailing, industry-laggard Tesla AI model is the complete inverse of common sense: it generates huge amounts of extremely low-quality data as a big bogus show of “working hard” instead of trying to get signal through more trustworthy, high-efficiency investments that would make real, measured progress.

It’s like when someone dumps tens of thousands of heavy paper pages (mostly blank) on a table and boasts they’re the smartest person in the room working on the hardest problems, instead of a normal person offering a 10 page report showing actual intelligence.

Source: The Orange House

Tesla solving 1+1=2 a hundred million times (lower level or no level) is NOT equivalent to someone doing basic algebra (higher level) a thousand times. Yet you’ll see Tesla often arguing they have the “most” miles as a form of trickery, emphasizing again they are selling failure (e.g. they’re still getting 1+1=5).

Worse, Tesla PR on social media about repeatedly failing a calculus test doesn’t validate a massively high-volume, low-quality, low-level program. A proud parent boasting their baby will soon do quadratic equations is… nonsense. Focus instead on changing that full diaper, maybe?

Tesla failures since 2016 have suggested fraud: critics silenced in favor of human guinea pigs, PR stunts caught up in constant broken promises.

So, as I pointed out before with “beta” version 10, the latest release of software to public roads is putting innocent people in harm’s way. FSD 11, despite all the big claims to focus on the high challenge of automated unprotected left turns, is immediately being demonstrated by “testers” as unsafe.

Start at 13:15 and watch FSD 11 creep into an intersection, blocking a crosswalk to run a red light.

Ladies and gentlemen, Tesla’s best attempt ever still doesn’t see red lights in intersections.

The test driver warns things only get much worse after that, such that by 38:45 he is angry and says “face palm… one blunder after another… really uncomfortable… no point in using FSD”.

PLTRs_Palantir: Targeted harassment of critics of Palantir

Targeted harassment of critics of Palantir appears to be an organized function driven by right-wing extremists posing as investors.

It’s that simple.

More attention should be brought to the situation. These attacks are a symptom of Palantir being a political dragnet for monopolist (fascist) objectives.

First, Palantir doesn’t deliver on what it claims. It fails at even basic intelligence, as I’ve written about before. People quitting Palantir over the years have reached out to me to complain about how misled and ashamed they feel for having been part of the bait.

If you doubt Palantir, you’re probably right. In other words, the American company shamelessly built an overpriced and unaccountable “justice” system that tries to paint the world with an overly simplistic good/evil dichotomy.

Second, it’s a proprietary opaque platform designed for data entrapment. Any time or money invested into trying to integrate with it is sunk cost, never recoverable. That’s their hook.

“Every trust in England will be forced to integrate [by Palantir]…” said GP IT consultant and clinical informatics expert Marcus Baw. “This means there has already been significant taxpayer investment…. Trusts are busy, with limited IT team capacity, so they cannot afford to redo work.”

Third, it is accused of privacy violations as an overbroad dragnet to facilitate right-wing political attacks and target critics. That’s the “reel” problem (pun intended)…

“The company is failing to fulfill its human rights responsibilities.”

That was a warning from observers, which proved to be chillingly accurate.

Unidentified police officers in Hesse [using Palantir] accessed the contact details of several politicians and prominent immigrants from official records and shared them with the neo-Nazi group, according to local reports.

In other words, unwitting targets of Palantir sign giant contracts on the promise of some knowledge gain only to find out it’s an intentional trap. They lose control of privacy and sensitive data as it goes into use/processing for right-wing political extremism.

Let it sink in for a minute that police in Germany used their Palantir deployment as intelligence to facilitate Nazi groups attacking political opponents.

If that doesn’t give you an accurate enough picture of what Palantir is intending to do, read on for more.

Here’s how one particularly subversive Palantir “investment group” characterizes itself as taking on the world.

Source: Discord

This group clearly sees Palantir for what it allegedly is meant to be. They say they are “investors” as if they care about quick profit, yet they seem far more interested in abrupt power gains for… a race war.

It helps explain why an “investment group” has such overt political intentions, and why they set up a Twitter account to abuse people, spreading hate as their brand in attacking critics.

Source: Twitter

They pattern (arguably a British English usage) with tired American political attack memes: dog whistles loud enough that you don’t need dog ears.

“Valinor via Valhalla” is a shout-out to right-wing “fascination”. Like the swastika, the origins erode and unfortunately become irrelevant with heavy hate-group usage.

This iconography has long held sway in the political sphere. The idea of a tall, strong, blonde-haired and blue-eyed Nordic race was one that came to underpin the Nazis’ Aryan ideal and cemented the subsequent right-wing fascination with the Norsemen. Interestingly, JRR Tolkien, whose Middle-earth draws heavily upon the Sagas and Eddas, admonished “that ruddy little ignoramus Adolf Hitler” for “ruining, misapplying, and making forever accursed, that noble northern spirit.”

You can see this “investment group” operate more like a right-wing hate group as it coughs up “Soros” hairballs.

Source: Twitter
Source: Discord

You might recall how Elon Musk infamously insulted and taunted a disabled member of his staff? How Trump mocked disabled people? That insensitivity is another pattern of political extremism, which this “investment group” has turned into a tasteless moniker: “#palantard”

Source: Twitter

As expected, their response to my criticism of Palantir lacks any intelligence other than name-calling (see point one above: how Palantir fails at what it promises).

Obviously they name-search for the company, then raise a dog-pile flag to go after any critic by using generic personal attacks as censorship. Get it? Hunt to find targets for public attack based on singular weak identifiers (e.g. critic). Such a good/bad simplistic classifier narrative to mobilize a mob reaction is the literal opposite of intelligence.

A look at their Twitter and Discord accounts, like assessing use of the Palantir platform itself, shows directed and coordinated/targeted harassment for political purposes (power).

They engage in typical Pepe memes, chan language (“chad,” “tard”), and monopolist fantasies (e.g. flashy watches as commodity fetishism of class supremacy, a derivative of Colonialism).

A group pumping out Soros as an insult and playing 2016 Alt Right memes is the face of who really believes in Palantir as a wise investment. That’s not a coincidence.

And perhaps most tellingly, the “investment group” pleads they can’t be racist or antisemitic because they refer to Palantir’s CEO as their “daddy” (e.g. the flawed “some of my best friends are” defense), which is obviously about as right as them saying that following (Kan)Ye isn’t wrong.