Altman’s OpenAI and WorldCoin Might Just Be Lying to Everyone on Purpose

Lately, when people ask me about OpenAI’s ChatGPT lying so brazenly, intentionally misstating facts and fabricating events, I explain that this is likely the purpose of the tool. It aims only to please, never to be “right” in any real sense, let alone to have integrity.

ChatGPT lies and lies and lies. When caught, and asked to stop lying, it suggests removing attempts at factual accuracy from its responses. This is like ordering soup from a waiter and being presented with an obviously unsafe, dirty bowl; when you tell them not to violate health codes, the waiter offers a “clean” bowl filled with soap and water. “Inedible” puts it mildly; “egregious code violation” is more like it.

Based on extensive direct experience helping law firms investigate fraud among platforms offering chat services, I’ve watched the AI company get worse over time. Lately the ChatGPT software, for example, has tried to convince its users that the U.S. Supreme Court banned the use of seatbelts in cars due to giant court cases in the 1980s… cases that SIMPLY DO NOT EXIST, supporting a premise that is an OBVIOUS LIE.

I hate calling any of this “hallucinations” because, at the end of the day, the software doesn’t understand reality or context, so EVERYTHING it says is a hallucination and NOTHING is trustworthy. The fact that it up-sells itself as being “here” to provide accuracy, while regularly failing to be accurate and without any accountability, is a huge problem. How valuable is a cook who says they are “here” to provide dinner yet can NOT make anything safe to eat? (Don’t answer if you drink Coke).

Ignoring reality while claiming to possess a very valuable and special version of it appears to be a hallmark of the Sam Altman brands, which have built a record of unsafely rushing past stop signs and ignoring red lights, like a robot made by Tesla making robots like Tesla.

He has never been punished for false statements, so long as he had a new big idea to throw to rabid investors and a credulous media.

Fraud. Come on regulators, it’s time to put these charlatans back in a box where they can’t do so much harm.

Fun fact: the CTO of OpenAI went from being a Goldman Sachs intern to being “in charge” of a catastrophically overpromised and underdelivered unsafe AI product at Tesla. It’s a wonder she hasn’t been charged over 40 deaths.

Here’s more evidence on the CEO, from the latest about his WorldCoin fiasco:

…ignored initial order to stop iris scans in Kenya, records show. …failed to obtain valid consent from people before scanning their irises, saying its agents failed to inform its subjects about the data security and privacy measures it took, and how the data collected would be used or processed. …used deceptive marketing practices, was collecting more personal data than it acknowledged, and failed to obtain meaningful informed consent…

Sam Altman runs a company that failed to stop when ordered to do so, and continued to operate immorally and violate basic safety, as if “never punished”.

This is important food for thought, especially given OpenAI has lately taken to marketing wild, speculative future-leaning promises about magically achieving “Enterprise” safety certifications long before it has done the actual work.

Trust them? They are throwing out a lot of desperate-to-please big ideas for rabid investors, yet there’s still zero evidence they can be trusted.

Perfect example? Their FAQ about privacy makes a very hollow-sounding yet eager-to-please statement that they have been audited (NOT the same as stating they are compliant with requirements):

Fundamentally, these companies seem to operate as though they can be above the law, peddling intentional hallucinations to placate certain people into being trapped by a “nice and happy” society in the worst ways possible… reminiscent of drug dealers peddling political power-grabs and fiction addiction.

“In the United States there are no Peugeot or Renault cars!”

Here’s how a Peace Corps veteran tries to illustrate the presence and effects of French colonialism:

I shall never forget a Comorian friend’s reaction to his first trip to the United States. Arriving back in Moroni, rather than enthusiastically describing skyscrapers, fast food, and cable TV, his singular observation was that in the United States there are no Peugeot or Renault cars! This piece of technology, essential to Comorian life, had always been French, and this Comorian was shocked to learn that there were alternatives.

Why would anyone ever expect someone with access to daily delicious fresh fruit and fish to ever enthusiastically describe… fast food?

Yuck!

Skyscrapers?

Wat.

The French passed draconian laws and did worse to require colonies (especially former ones) to only buy French exports. I get it, yup I do. So Comorians lived under artificial monopoly, and only knew French brands. Kind of like how the typical American who visits France says “I need a coffee, where’s the Starbucks?” Or the American says “I need to talk with my family and friends, where’s the Facebook?”

Surely being forced by frogs into their dilapidated cars, however, still rates quite far above entering the health disaster of American fast food. A Comorian losing access to delicious, locally made, slow, high-nutrition cuisine is nightmare stuff.

But seriously…

“This piece of technology, essential to Comorian life” is a straight-up Peace Corps lie about cars.

Everyone (especially the bumbling French DGSE) knows a complicated expensive cage on four wheels is unessential to island life, inefficient, and only recently introduced. Quality of life improves inversely to the number of cars on a single lane mountain road.

Motorbikes? That’s another story entirely, as an actual “unexpected” power differential, which the Israelis, Afghans, Chinese, and lately Ukrainians know far too well (chasing British and Japanese lessons).

Go home, Peace Corps guy, your boring big car ride to a lifeless big skyscraper box filled with tasteless Big Macs is waiting. Comorians deserve better. American interventionists should try to improve conditions locally and appropriately, not drive former colonies so far backwards they start missing French cars.

Deception From a BlackHat Conference Keynote About Regulating AI

One note from this year’s typically out-of-tune “BlackHat” festivities in Las Vegas has struck a particularly dissonant chord: a statement that regulators habitually trail behind the currents of innovation.

A main stage speaker expressed a strange sentiment that stood in contrast to the essence of good regulation itself, which strives to act as an anticipatory framework for an evolving future (much as a car’s regulators, such as brakes and suspension, are designed for the next turn, not just the last one).

At the conference’s forefront, a keynote presented what the speaker wanted people to believe is a historical pattern: governmental entities mired in reactionary postures, unable to get “ahead” of emerging technologies.

Unlike in the time of the software boom when the internet first became public, Moss said, regulators are now moving quickly to make structured rules for AI.

“We’ve never really seen governments get ahead of things,” he said. “And so this means, unlike the previous era, we have a chance to participate in the rule-making.”

Uh, NO. As much as it’s clear the keynote was attempting to generate buzz and excitement from opportunity, its premise is entirely false. This is NOT unlike previous eras. In fact, AI is as old as, if not older than, the Internet.

If what Moss said were even remotely true about regulators moving quickly to get ahead of AI, unlike other technology, then the present calendar year should read 1973 NOT 2023.

Regulations were in fact way ahead of the Internet, and are often credited for its evolutionary phases, for better or worse, which makes another rather obvious counter-point.

  • Communications Act of 1934 reduced anti-government political propaganda (e.g. Hearst’s overt promotion of Hitler). It created the FCC to break through corporate-monopolistic grip over markets from certain extreme-right large broadcast platforms and promoted federated small markets of (anti-fascist) publishers/broadcasters… to help with the anticipated emerging conflict with Nazi Germany.
  • Advanced Research Projects Agency Network, funded by the U.S. Department of Defense, in 1969 created the “first workable prototype of the Internet” realizing a 1940s vision of future decentralized peering through node-to-node communications.
  • Telecommunications Act of 1996 inverted 1934 regulatory sentiment by removing content safety prohibiting extreme-right propaganda (predictably restarting targeted ad methods to manipulate sentiment, even among children), and encouraged monopolization through convergence of telephones, television and Internet.

I blame Newt “Ideas” Gingrich for the extremely heavy dose of crazy that went into that 1996 Act.

You hear Gingrich’s staff has these five file cabinets, four big ones and one little tiny one. No. 1 is ‘Newt’s Ideas.’ No. 2, ‘Newt’s Ideas.’ No. 3, No. 4, ‘Newt’s Ideas.’ The little one is ‘Newt’s Good Ideas.’

And I did say better AND worse, as some regulators warned about in 1944.

…inventions created by modern science can be used either to subjugate or liberate. The choice is up to us. […] It was Hitler’s claim that he eliminated all unemployment in Germany. Neither is there unemployment in a prison camp.

But perhaps even more importantly, Moss overlooks the fundamental nature of regulation: an instrument that historically has projected forward, preemptively calibrating guidelines to steer innovation, getting ahead of things by navigating progress toward responsible and moral avenues.

Moss’s viewpoint fits a broader history of “BlackHat” disinformation tactics, which ironically calls for proactive strides in outlining rules for the burgeoning realm of artificial intelligence because regulators are behind the times, while simultaneously decrying regulators who do too much too soon by not leaving unaccountable play space for the most creative exploits (e.g. hacking).

Say anything, charge for admission.

A more comprehensive and stable view emerges against the inconsistencies in their lobbying by scrutinizing the purpose of regulatory frameworks. They are, by design, instruments of foresight, an architecture meant to de-risk the uncharted waters of the future. Analogous to traffic laws, such as the highly controversial seat belt mandates that dramatically reduced automobile-related fatalities, regulations generate the playing field for genuine innovation by anticipating potential dangers and trying to stop harms. Airbags, also highly controversial, were then developed as an innovation driven by regulation after the risk reduction of seat belts petered out. Ahead or behind?

In essence, regulatory structures are nothing if not future-focused. They are about drawing blueprints, modeling threats, calculating potential scenarios and ensuring responsible engineering for building within those parameters. Regulatory frameworks, contrary to being bound just to strict historical precedent, are vehicles of anticipation, fortified to embrace the future’s challenges with proactive insight.

While Moss’s sensational main stage performance pretends to allude to past practices, it unfortunately spreads false and deceptive ideas about how regulators work and why. The giant tapestry of regulation around the world is woven with threads of anticipatory vigilance, made by people who spend their careers establishing a secure trajectory forward.

Speaking of getting ahead of things, it’s been decades since regulators openly explained that BlackHat has racist connotations, and three years since major changes in the industry by leaders such as the NCSC.

…there’s an issue with the terminology. It only makes sense if you equate white with ‘good, permitted, safe’ and black with ‘bad, dangerous, forbidden’. There are some obvious problems with this…

Obvious problems, and yet the poorly-named BlackHat apparently hasn’t tried to get ahead of these problems. If they can’t self-regulate, perhaps they’re begging for someone with more foresight to step in and regulate them.


Now, don’t get me started on the history of cowboy hats and why disinformation about them in some of the most racist movies ever made (e.g. “between the 1920s and the 1940s”) does not in any way justify a tolerance for racism.

In this golden age of Westerns, good and evil weren’t color coded: there was plenty of room for moral ambiguity.

I say don’t get me started because as soon as someone says “but Hollywood said good-guy hats are white” I have to point them at the absolutely horrible propaganda that was actually created by Hollywood to form that association of a “White Hat” being “good”:

…Hollywood took charge. In 1915, director D. W. Griffith adapted The Clansman as The Birth of a Nation, one of the very first feature-length films and the first to screen in the White House [and used by President Woodrow Wilson to restart the KKK]. Its most famous scene, the ride of the Klan, required 25,000 yards of white muslin to realize the Keller/Dixon costume ideas. Among the variety of Klansman costumes in the film, there appeared a new one: the one-piece, full-face-masking, pointed white hood with eyeholes, which would come to represent the modern Klan.

That’s how “room for moral ambiguity” in Westerns was codified by some wanting white to be “good”. BlackHat thus framing itself as a “bad” guy conference peddles racism straight out of 1915 Hollywood, and never in any good way. They really need to get ahead of things.

I mean, practically any historian should be able to tell BlackHat that while in America white hats typically were regarded as the bad guys (very unlike Hollywood’s KKK-promoting propaganda), the inverse of the evil lawlessness of American white hate groups was BLUE.

…lawless in its inception, [white hat wearing group] was soon dominated by the lawless element against which it was formed. The good citizens, instead of restoring law and order, became the servants and tools of disorder and mob violence. After two years of white-capism, another organization was formed to “down” the “White-caps,” called “Blue Bills.”

The Blue Bills, started by a doctor, in fact were all about law and order, rejecting the “anonymous hacker” methods of the white hats.

The Blue Bills did not consider themselves above the law; they didn’t consider themselves vigilantes, or heroes. They simply wanted to bring a stop to the terror that the White Caps had brought about…

But I digress… the BlackHat conference name is and always has been on the wrong side of history.

Suspicious Errors and Omissions in Inflection’s Generative PI.AI Privacy Policy and TOS

I was asked to have a look at the generative AI site of PI.AI and right away noticed something didn’t flow right.

The most prominent thing on that page, to my eye, is this rather aggressive phrase:

By messaging Pi, you are agreeing to our Terms of Service and Privacy Policy.

It’s like meeting a new person on the street who wears a big yellow cowboy hat that says if you speak with them you automatically have agreed to their terms.

Friendly? Nope. Kind? Definitely NOT.

How’s your day going? If you reply you have agreed to my terms and policies.

Insert New Yorker cartoon above, if you know what I mean.

Who is PI.AI? Why do they seem so aggro? There’s really nothing representing them, no meaningful presence on the site at all. Surely this is a purposeful omission.

Why trust such opacity? Have you ever walked into a market and noticed one seller who simultaneously looks like they don’t care yet also seems desperate to steal your wallet, your keys and your family?

I clicked through the links being pushed at me by the strangely hued PI to see what they are automatically taking away from me. It felt about as exciting as dipping a meat-stick into a pool of alligators who were trained at Stanford to inhabit digital moats and chew up GDPR advocates.

Things quickly went from bad to worse.

If you click on the TOS link, given how it’s presented first and foremost, your browser is sent to an anchor reference at the end of the Privacy Policy. Scroll up to the top of the Privacy Policy (erase the anchor) and you’ll find a choice phrase used exactly three times:

  • Privacy Rights and Choices: Delete your account. You may request that we delete your account by contacting us as provided in the “How to Contact Us” section below.
  • To make a request, please email us or write to us as provided in the “How to Contact Us” section below.
  • If you are not satisfied with how we address your request, you may submit a complaint by contacting us as provided in the “How to Contact Us” section below.

Of course I searched below for that very specific and repetitive “How to Contact Us” section but nothing was found. NOTHING. NADA.

Not good. There’s no mistaking that the language, the code if you will, is repeatedly saying that “How to Contact Us” is being provided, while it’s definitely NOT being provided.

Oh well, I’m sure it’s just lawyers making some silly errors. But it brings to mind how the AI running PI probably should be expected to be broken, unsafe and full of errors and omissions or worse.

It stands to reason that if the PI policy very clearly and specifically calls out a rather important section title over and over again (perhaps one of the most important sections of all, relative to rights and protections), then accuracy would help those concerned about safety, including privacy protections. Or a link could be a really good thing here; they clearly use anchors elsewhere, and this is a webpage we’re talking about.
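Checking a policy page for this kind of dangling reference is trivial to script. Here’s a minimal sketch (a hypothetical helper run against toy markup, not Inflection’s actual page) that flags section titles a policy references in its text but never defines as a heading or named anchor:

```python
import re

def find_missing_sections(html: str) -> list[str]:
    """Report section titles referenced in the text but never defined.

    A policy that says 'as provided in the "How to Contact Us" section
    below' should actually contain a heading or anchor with that title.
    This is a rough heuristic, not a legal audit.
    """
    # Titles referenced in quotes immediately followed by the word "section"
    referenced = set(re.findall(r'["“]([^"”]+)["”]\s+section', html))
    # Titles that actually appear as headings or named anchors
    defined = set(re.findall(r'<h[1-6][^>]*>([^<]+)</h[1-6]>', html))
    defined |= set(re.findall(r'<a\s+(?:name|id)="[^"]*">([^<]+)</a>', html))
    return sorted(t for t in referenced if t not in defined)

# A toy policy modeled on the pattern described above (hypothetical markup):
policy = """
<p>You may request deletion by contacting us as provided in the
 "How to Contact Us" section below.</p>
<h2>Your Rights</h2>
"""
print(find_missing_sections(policy))  # → ['How to Contact Us']
```

A check like this, run before publishing, would have caught the PI policy pointing readers at a section that simply isn’t there.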

Journalists provide some more insight into why PI.AI coding/scripting might be so strangely sparse, sloppy and seemingly unsafe.

Before Suleyman was at Greylock, he led DeepMind’s efforts to build Streams, a mobile app designed to be an “assistant for nurses and doctors everywhere,” alerting them if someone was at risk of developing kidney disease. But the project was controversial as it effectively obtained 1.6 million UK patient records without those folks’ explicit consent, drawing criticism from a privacy watchdog. Streams was later taken over by Google Health, which shut down the app and deleted the data.

A controversial founder seems to be known primarily for having taken unethical actions without consent. Interesting and unfortunately predictable twist that someone can jump from criticism by privacy watchdogs into even more unregulated territory.

With all the money the PI.AI guys keep telling the press about, their poorly worded, vague privacy policy, not to mention its errors and omissions, seems kind of rich and privileged.

Less than two months after the launch of their first chatbot Pi, artificial intelligence startup Inflection AI and CEO Mustafa Suleyman have raised $1.3 billion in new funding. […] “It’s totally nuts,” he admitted. Facing a potentially historic growth opportunity, Suleyman added, Inflection’s best bet is to “blitz-scale” and raise funding voraciously to grow as fast as possible, risks be damned.

Ok, ok, hold on a minute, we have a problem, Houston. The terminology used is supposed to alert us: a blitz where risks be damned? Amassing funds to grow uncontrollably for a BLITZ!?

That talk sounds truly awful, eh? Is someone out there saying, you know what we need is to throw giant robots together as fast as possible, risks be damned, for a Blitz?

Call me a historian, but it’s so 1940s: hot-headed General Rommel right before he unwisely stretched his forces into being outclassed and outmaneuvered by British counter-intelligence in Operation Bertram.

Source: “Images of War: The Armour of Rommel’s Afrika Korps” by Ian Baxter. Rommel’s men give him a look of disgust; “grow as fast as possible” orders fall apart due to deep integrity flaws.

I can’t even.

Back to digging around the sprawling set of Inflection-registered domain names for something, anything, resembling the contact information meant to be in the TOS and Privacy Policy: I found that “heypi.com” hosts an almost identical-looking page to PI.AI.

The Inflection color scheme, a page hue of tropical gastrointestinal discomfort for lack of a better “taupe” description, is quite unmistakable. Searching more broadly, poking around that unattractive heypi.com page and wading through every detail, finally solved the case of the obscured or omitted contact-details section.

Here’s the section without running scripts:

See what I mean by the color? And here it is when you run the scripts:

All that, just for them to say their only contact info is privacy@pi.ai? Ridiculous.

Some bugs are very disappointing, even though they’re legit errors that should be fixed. Now I just feel overly pedantic. Sigh. Meanwhile, there are plenty more problems with the PI Privacy Policy and TOS that need attention.