I was asked to have a look at the generative AI site of PI.AI and right away noticed something didn’t flow right.
The most prominent thing on that page to my eye is this rather aggressive phrase:
By messaging Pi, you are agreeing to our Terms of Service and Privacy Policy.
It’s like meeting a new person on the street wearing a big yellow cowboy hat that says if you speak with them you’ve automatically agreed to their terms.
Friendly? Nope. Kind? Definitely NOT.
How’s your day going? If you reply you have agreed to my terms and policies.
Insert New Yorker cartoon above, if you know what I mean.
Who is PI.AI? Why do they seem so aggro? There’s really nothing representing them, no meaningful presence on the site at all. Surely the omission is purposeful.
Why trust such opacity? Have you ever walked into a market and noticed one seller who simultaneously doesn’t seem to care yet also seems desperate to steal your wallet, your keys and your family?
I clicked through the links being pushed at me by the strangely hued PI to see what they are automatically taking away from me. It felt about as exciting as dipping a meat-stick into a pool of alligators who were trained at Stanford to inhabit digital moats and chew up GDPR advocates.
Things quickly went from bad to worse.
If you click the TOS link, presented as it is first and foremost, your browser is sent to an anchor reference at the end of the Privacy Policy. Scroll up to the top of the Privacy Policy (erase the anchor) and you’ll find a choice phrase used exactly three times:
- Privacy Rights and Choices: Delete your account. You may request that we delete your account by contacting us as provided in the “How to Contact Us” section below.
- To make a request, please email us or write to us as provided in the “How to Contact Us” section below.
- If you are not satisfied with how we address your request, you may submit a complaint by contacting us as provided in the “How to Contact Us” section below.
Of course I searched below for that very specific and repetitive “How to Contact Us” section but nothing was found. NOTHING. NADA.
Not good. There’s no mistaking that the language, the code if you will, is repeatedly saying that “How to Contact Us” is being provided, while it’s definitely NOT being provided.
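A broken cross-reference like this is easy to check mechanically. Here is a minimal sketch of that kind of check; the HTML snippet, the heading markup, and the regex patterns are my own hypothetical reconstruction of the pattern, not Pi’s actual page source. It collects every section title the text claims is “provided … below” and subtracts the headings that actually exist:

```python
import re

# Hypothetical policy HTML: the text references a section
# that is never defined as a heading anywhere on the page.
POLICY_HTML = """
<h2 id="privacy-rights">Privacy Rights and Choices</h2>
<p>You may request that we delete your account by contacting us as
provided in the "How to Contact Us" section below.</p>
<h2 id="changes">Changes to this Policy</h2>
<p>...</p>
"""

def missing_sections(html: str) -> set[str]:
    """Return section titles the text references but never defines as a heading."""
    referenced = set(
        re.findall(r'provided in the [“"]([^”"]+)[”"] section below', html)
    )
    headings = set(re.findall(r'<h[1-6][^>]*>([^<]+)</h[1-6]>', html))
    return referenced - headings

print(missing_sections(POLICY_HTML))  # prints {'How to Contact Us'}
```

Anyone publishing a policy with internal cross-references could run a pass like this before shipping; that it apparently never happened here is the point.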
Oh well, I’m sure it’s just lawyers making some silly errors. But it brings to mind that the AI running Pi should probably be expected to be broken, unsafe, and full of errors and omissions, or worse.
It stands to reason that if the Pi policy very clearly and specifically calls out an important section title over and over again (perhaps the most important section of all, relative to rights and protections), then accuracy would help anyone concerned about safety, including privacy protections. A link would also be a really good thing here, as they clearly use anchors elsewhere; this is a webpage we’re talking about.
Journalists provide some more insight into why PI.AI coding/scripting might be so strangely sparse, sloppy and seemingly unsafe.
Before Suleyman was at Greylock, he led DeepMind’s efforts to build Streams, a mobile app designed to be an “assistant for nurses and doctors everywhere,” alerting them if someone was at risk of developing kidney disease. But the project was controversial as it effectively obtained 1.6 million UK patient records without those folks’ explicit consent, drawing criticism from a privacy watchdog. Streams was later taken over by Google Health, which shut down the app and deleted the data.
A controversial founder seems to be known primarily for having taken unethical actions without consent. It’s an interesting and unfortunately predictable twist that someone can jump from criticism by privacy watchdogs into even more unregulated territory.
With all the money the PI.AI guys keep telling the press about, a privacy policy this poorly worded and vague, not to mention the errors and omissions, seems kind of rich and privileged.
Less than two months after the launch of their first chatbot Pi, artificial intelligence startup Inflection AI and CEO Mustafa Suleyman have raised $1.3 billion in new funding. […] “It’s totally nuts,” he admitted. Facing a potentially historic growth opportunity, Suleyman added, Inflection’s best bet is to “blitz-scale” and raise funding voraciously to grow as fast as possible, risks be damned.
Ok, ok, hold on a minute: Houston, we have a problem. The terminology is supposed to alert us, no? A blitz where risks be damned? Amassing funds to grow uncontrollably for a BLITZ!?
That talk sounds truly awful, eh? Is someone out there saying, you know what we need is to throw giant robots together as fast as possible, risks be damned, for a Blitz?
Call me a historian, but it’s so 1940s hot-headed Rommel, right before he unwisely overstretched his forces and was outclassed and outmaneuvered by the deception campaign of Operation Bertram.
I can’t even.
Back to digging around Inflection’s sprawling registered domain names for something, anything, resembling the contact information meant to be in the TOS and Privacy Policy. I found that “heypi.com” hosts an almost identical-looking page to PI.AI.
The Inflection color scheme, a page hue of tropical gastrointestinal discomfort for lack of a better “taupe” description, is quite unmistakable. Searching more broadly by poking around that unattractive heypi.com page, wading through every detail, finally solved the case of the obscured or omitted contact section.
Here’s the section without running scripts:
See what I mean by the color? And here it is when you run the scripts:
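That difference between the two screenshots is the whole trick: the contact details don’t exist in the HTML the server sends; a script injects them into the DOM client-side. A toy reconstruction of that pattern (the placeholder markup and the injection snippet are hypothetical, not Pi’s actual code) shows why a search of the page source, a crawler, or anyone with scripts disabled comes up empty:

```python
# Hypothetical recreation: the served HTML has only an empty placeholder,
# and a client-side script fills in the contact details after load.
STATIC_HTML = '<div id="contact"></div>'

INJECTED_BY_SCRIPT = (
    'document.getElementById("contact").textContent = '
    '"Contact us at privacy@pi.ai";'
)

# Viewing source, disabling scripts, or fetching the page with curl
# all see only STATIC_HTML -- the address simply is not there.
print("privacy@pi.ai" in STATIC_HTML)          # prints False
print("privacy@pi.ai" in INJECTED_BY_SCRIPT)   # prints True
```

Nothing nefarious is required to produce this; it’s what any script-heavy single-page app does by default. But for the one piece of legally significant contact information in a privacy policy, rendering it only via JavaScript is a poor choice.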
All that, just for them to say their only contact info is privacy@pi.ai? Ridiculous.
Some bugs are very disappointing, even though they’re legit errors that should be fixed. Now I just feel overly pedantic. Sigh. Meanwhile, there are plenty more problems with the PI Privacy Policy and TOS that need attention.
I alerted Inflection AI to a problem with their SMS not being received on a major US carrier, and despite all my troubleshooting with the carrier, the phone manufacturer, and Google, plus emailing, tweeting, and noting the issue on the various platforms where Pi can be accessed, their tech support team and developers have not replied. Requests to delete my accounts, to rule out issues on their side, have also been ignored.

Pi has great potential for helping users with mental health issues, but if it cannot reach users by SMS and those who want to help improve it are ignored, then it is destined to become a failure and a blip on the page of AI’s history. Their policies also currently bar teenagers and children from using Pi, which is unfortunate because they are a population dealing with depression, anxiety, and body-image issues, and they often lack access to mental health resources. I’ve reached out to Inflection AI’s CEO to talk about that, with no response. For disclosure, I have kids on the autism spectrum.

If I were a paying customer, this lack of concern would be a deal breaker for me. With more than 20 years of technical support experience and over 15 in mental health, I originally believed Pi could be the greatest thing since the Internet.
Yeah, lots of good points in this post. That’s some pretty tone-deaf “consent language” they have at the bottom of the page; phrases like that telegraph an “old world of privacy” way of thinking, where consent is forcefully inferred. It’s obvious they’ve hardly thought about privacy at all.
Mrs. Lincoln, if all you see while watching the play is your husband being shot in the head, you could easily miss the amazing performances on stage.
Use the product, explain how PI works, interacts & functions.
Accept you and your pathetic loved ones may be dead, while someone behind the curtain in Palo Alto gets richer and richer.
Try to find Facebook’s or Google’s contact-us page. Good luck.