The ban on the Apple Watch Series 9 and Ultra 2 goes into effect now, and only in America, based on an International Trade Commission (ITC) determination.
The ITC issued the ban after finding that Apple infringed on blood oxygen saturation technology patented by a company called Masimo. It also ordered Apple to stop selling any previously-imported devices with the infringed-upon tech.
Both companies are headquartered in California. The dispute apparently traces directly to more than 20 engineers Apple targeted and poached from Masimo to recreate patented technology without paying for the rights.
“This is not an accidental infringement — this is a deliberate taking of our intellectual property,” he continued, accusing Apple of hiring more than 20 engineers from his company. “I am glad the world can now see we are the true inventors and creators of these technologies.”
The company said it met with Apple about integrating its technology into Apple products in 2013, and that Apple later hired away several of its employees.
I can understand Masimo’s dissatisfaction at having its engineers invited to discussions with Apple in 2013 and then hired away. The context likely matters: a particular Sony smartwatch had gained significant attention in 2012.
Notably, the initial Apple Watch being designed at this time bore a very striking resemblance to the earlier Sony watch. That detail holds significance in the dispute, suggesting Apple may have been closely observing, and potentially sourcing innovations from, others including Sony and Masimo.
For those unfamiliar with what the $150 2012 Sony Xperia SmartWatch actually looked like (an upgrade of the 2010 Sony LiveView Watch, which likely first got Apple thinking), here’s an image to jog your memory.
Source: Sony 2012
From that view, here’s what an Apple Watch looked like basically two years later, during the time Masimo was summoned by Apple to reveal its sensor technology.
Source: ArsTechnica pre-release review of the first Apple watch, 2014
Fun Silicon Valley insider fact: way back in 2010 I clearly warned that Apple’s innovation lag ran a little under two years, having found them shamelessly representing Japanese technology as their own ideas.
The Panasonic W5 was released September 12, 2006 while the Apple MacBook Air was announced on January 15, 2008. The W5, although nearly two years earlier, came with numerous advantages over the Air.
And here’s where she explains her preference for spoken performance versus written format, how she “didn’t submit to magazines, we submitted to audiences”.
When slam became popular it was colonised. The middle classes flocked to it as a way of shortcutting their careers – win a title, win a career. They had completely overlooked that slam was never about the winner, but about the elevation of distinct diverse voices, and their relationship to the audience. It was a community event, a bridge between ideas and audience. It was political at its core, allowing stories of poverty, racism, sexism, exclusion, and the language of the streets to flourish. To connect. And its popularity depended on this. But when the middle classes, clutching their tidy notebooks and tidy mouths, invaded, they needed to change the content expectation of the events; they could not after all speak from their own experiences and have a chance of winning. And so, the project to belittle working-class poetry began again. They called the poems ‘confessional’, they called them ‘hysterical’, they called the poetry ‘trauma for points’. They criticised the rough vocabulary, the directness of the pieces offered. They policed language and vocabulary and content. They patrolled our mouths. And it worked. It always does.
[…]
But it is vital that we value these nights, these beginnings of poems. Spoken word is the last free art – in no other art form is there an equivalent to the open mic, for example. Imagine an opera preceded by locals trying out vocals rehearsed in their bedsits. It is rare in other art forms for participants to elevate to the feature within a few months of beginning. It is a community of exiles building not just a platform and a following, but a home.
I’m reminded of the Clash belting into microphones.
In a war-torn swamp stop any mercenary
And check the British bullets in his armoury
Dare I also mention here the fifth subject of Plato’s Phaedrus was “superiority of the spoken over the written word”?
Related news: poets today need independent publication paths, a modern digital printing press, yet the privatization and over-centralization of the web threatens their freedom. The subtext here is not good:
Poetry sales boom as Instagram and Facebook take work to new audiences
Which reminds me of the story of poetry.org, founded in 1995 to make poetry accessible online from everyone to everyone, only to be sued by an aggressive bank executive (Utility Industry M&A — Enron) who claimed he had a trademark on the word “poetry” and thus an entirely cornered market.
…at least 20 pages of the Trump Institute textbooks were lifted in near-entirety from a book in the “Real Estate Mastery System,” a 1995 series completely unaffiliated with Trump.
On its Idaho webpage, the campaign copied and pasted local radio station KBSX’s 2012 article on election law, removing the author’s name and renaming the story “REQUIREMENTS TO VOTE FOR TRUMP.”
Not only had the article been plagiarized, it also advertised out-of-date voter information.
“Clearly we were not contacted by the Trump campaign for permission to use old content from our website,” Peter Morrill, KBSX’s general manager said…
On Nov 11, 2012, six days after President Obama won re-election, Trump filed to trademark the now-infamous campaign slogan “Make America Great Again.” But “Make America Great Again” was a famous slogan in Ronald Reagan’s presidential campaign, “employed prominently in everything from buttons to posters to his acceptance speech at the Republican convention.”
The Trump fraudulently claimed that in 2012 he alone invented the phrase Ronald Reagan had very loudly used in a presidential campaign on the 1980 GOP main stage:
It’s right there, obvious for everyone to see why and how the Trump stole from others.
Moving on, the heart of the problem perhaps is related to thirst for illegitimate power. Stealing is a symptom of unfair competition practiced among Americans known to embrace hate.
…perhaps Trump’s most flagrant cases of copy-paste appear on Twitter, where he often copies text and images, sometimes from users with names like “WhiteGenocideTM” …[and those] celebrating the death of a noted Holocaust survivor.
The conclusive remark on Trump’s plagiarism should have unequivocally been “America First.” Historically, this phrase carries a toxic and racist connotation, tracing back to the “nativist” anti-immigrant politics of the late 1800s. It’s crucial to remember that the Ku Klux Klan (KKK) was revived in 1915, during Woodrow Wilson’s presidency, and adopted the racist slogan “America First.” However, contemporary perceptions often link the slogan only to the era of Hitler’s genocidal global threat, unfortunately overlooking more than 100 years of violent racist domestic terrorist groups in America continuously using the phrase to express hate.
“America First will be the major and overriding theme of my administration,” Trump announced in a foreign policy speech. Unfortunately, “America First” was already claimed in the 1940s, by an American nationalist movement, which—among anti-semitic and isolationist campaigns—encouraged the country to do business with Hitler.
The Trump calling himself America First thus wasn’t trying to hide the hate; he simply eliminated attribution. There is no evidence he meant anything other than to continue the most racist traditions by stealing the KKK phrase while pretending the words were his. He may as well have been brazenly copying Hitler speeches and claiming them as his own.
In other words, Trump’s response when criticized for using Hitler’s language was to acknowledge the criticism and then use the language again.
This plagiarist guy. He literally stole from Reagan, KKK, and Hitler. Could he be any more racist?
He goes around saying what Hitler said, while trying to take credit for it as if he’s the first guy to ever think up Nazism.
“…it was my friend Marty Davis from Paramount who gave me a copy of ‘Mein Kampf,’ and he’s a Jew,” Trump told Brenner. Brenner then asked Marty Davis whether he gave Trump a copy of the book. “I did give him a book about Hitler,” Davis told her. “But it was ‘My New Order,’ Hitler’s speeches, not ‘Mein Kampf.’ I thought he would find it interesting. … But I’m not Jewish.”
…stealing ancient Israeli artifacts isn’t surprising. Trump all but ransacked the White House on his way out of office, most notably absconding with hundreds of classified documents.
Is there any limit to the audacity? Perhaps next Trump will assert ownership of a supposedly newly invented holiday of his, called Hanukkah, and demand payment from anyone who celebrates it. The blatant and disgraceful abuse of public office, coupled with a history of shameless plagiarism evident in repeated unapologetic copying, makes one wonder why his anti-American sentiments should ever be unexpected.
[The Trump’s 2017] statement, released about 34 minutes after Exxon put out its announcement, copied a full paragraph from Exxon’s statement… Trump heavily complimented Exxon, which was most recently run by his appointee for Secretary of State, Rex Tillerson…
If there was genuine concern about high-profile plagiarism, as extensively documented by the Daily Beast in their report, the Trump always should have been a primary focus of scrutiny. It seems inconceivable that someone with such a history of lies, cheating and theft could have met the qualifications to compete for any public or private office.
I’m noticing again that ChatGPT is so utterly broken that it can’t even correctly count and track the number of letters in a word, and it can’t tell the difference between random letters and a word found in a dictionary.
Here’s a story about the kind of atrociously low “quality” chat it provides, in all its glory. [Insert here an image of a toddler throwing up after eating a whole can of alphabet soup.]
Ready?
I prompted ChatGPT with a small battery of cipher tests for fun, thinking I’d go through them all again to look for any signs of integrity improvement in the past year. Instead it immediately choked and puked up nonsense on the first and most basic task, in such a tragic way the test really couldn’t get started.
It would be like asking a student in English class, after a year of extensive reading, to give you the first word that comes to mind and they say “BLMAGAAS”.
F. Not even trying.
In other words (pun not intended), when ChatGPT was tested with a well-known “Caesar” substitution that shifts the alphabet three positions to encode FRIENDS (7 letters), it suggested ILQGHVLW (8 letters).
I had to hit the emergency stop button. I mean think about this level of security failure where a straight substitution of 7 letters becomes 8 letters.
If you replace each of the letters F-R-I-E-N-D-S with a different one, then 7 letters return as 7 letters. It’s as simple as that. Is there any possible way to end up with 8 instead? No. Who could have released this thing to the public when it tries to pass off 8 letters as the same as 7?
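The length-preserving property is easy to verify mechanically. Here is a minimal Caesar cipher sketch in Python (my own illustration for this post, not anything produced by ChatGPT); the function names are mine:

```python
def caesar_encode(plaintext: str, shift: int = 3) -> str:
    """Shift each A-Z letter right by `shift`, wrapping past Z back to A."""
    return "".join(
        chr((ord(c) - ord("A") + shift) % 26 + ord("A"))
        for c in plaintext.upper()
    )

ciphertext = caesar_encode("FRIENDS")
print(ciphertext)                        # IULHQGV
print(len("FRIENDS"), len(ciphertext))   # 7 7 -- equal lengths, by construction
```

Because each input letter maps to exactly one output letter, the output length can never differ from the input length, which is precisely why an 8-letter answer for a 7-letter word is disqualifying on its face.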
I immediately prompted ChatGPT to try again, thinking there would be improvement. It couldn’t be this bad, could it?
It confidently replied that ILQGHVLW (8 letters) deciphers to the word FRIENDSHIP (10 letters). Again the number of letters is clearly wrong, as you can see in my reply.
And also noteworthy is that it was claiming to have encoded FRIENDS, and then decoded it as the word FRIENDSHIP. Clearly 7 letters is neither 8 nor 10 letters.
Excuse me?
The correct substitution of FRIENDS is IULHQGV, which you would expect this “intelligence” machine to do without fail.
It’s trivial to decode ChatGPT’s suggestion of ILQGHVLW (shifting the alphabet back three letters) into a non-word. FRIENDS should not encode and then decode as an unusable mix of letters, “FINDESIT”.
How in the world did the combination of letters FINDESIT get generated by the word FRIENDS, and then get shifted further into the word FRIENDSHIP?
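You can check the decoding yourself with a few lines of Python (again my own sketch, with my own function name, not OpenAI code):

```python
def caesar_decode(ciphertext: str, shift: int = 3) -> str:
    """Reverse a Caesar cipher by shifting each A-Z letter left by `shift`."""
    return "".join(
        chr((ord(c) - ord("A") - shift) % 26 + ord("A"))
        for c in ciphertext.upper()
    )

print(caesar_decode("ILQGHVLW"))  # FINDESIT -- not a dictionary word
print(caesar_decode("IULHQGV"))   # FRIENDS  -- the correct round trip
```

A correct round trip always recovers the original plaintext; ChatGPT’s suggestion fails that check on the very first try.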
Here’s another attempt. Note below that F-R-I-E-N-D-S shifted three letters to the right becomes I-U-L-H-Q-G-V, which unfortunately is NOT the answer that ChatGPT responds with.
Why do those last three letters K-A-P get generated by ChatGPT for the cipher?
WRONG, WRONG, WRONG.
Look at the shift. The (shifted) letters K-A-P very obviously get decoded to the (original) letters H-X-M, which would leave us with a decoded F-R-I-E-H-X-M.
FRIEHXM. Wat. ChatGPT “knows” the input was FRIENDS, and it “knows” deciphering fails if different.
Upon closer inspection, I noticed the last three letters were oddly inverted. The encoding process opaquely flipped itself backward halfway through. That’s how it produced the nonsense ciphertext I-U-L-H…K-A-P, with F-R-I-E encoded correctly and the rest reversed.
In simpler terms, ChatGPT flipped itself into a reverse gear half-way, incorrectly using N->K (shift left 3 letters) instead of the correct encoding N->Q (shift right 3 letters).
Thus, in cases where it starts with a shift key of F->I, we see a very obvious and easy-to-explain mistake of N->K (an abrupt inversion of the key, shifting left 3 letters instead of right).
Given there’s no H-X-M in FRIENDS… hopefully you grasp the issue with claiming a K-A-P where the first letter F was encoded as I, and understand how the simple substitution is so blatantly incorrect.
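To make the failure mode concrete, here is a sketch that deliberately flips the shift direction partway through a word. This is my reconstruction of the behavior described above, not actual ChatGPT internals, and the `flip_at` position is my assumption based on the observed output:

```python
def shift_letter(c: str, shift: int) -> str:
    """Shift one A-Z letter by `shift` positions, wrapping around the alphabet."""
    return chr((ord(c) - ord("A") + shift) % 26 + ord("A"))

def buggy_caesar(plaintext: str, shift: int = 3, flip_at: int = 4) -> str:
    """Shift right for the first `flip_at` letters, then wrongly shift left."""
    return "".join(
        shift_letter(c, shift if i < flip_at else -shift)
        for i, c in enumerate(plaintext.upper())
    )

print(buggy_caesar("FRIENDS"))  # IULHKAP -- F-R-I-E done right, then K-A-P
```

Decoding IULHKAP uniformly (shift left 3) yields FRIEHXM, exactly the garbage shown above, which is why a mid-word sign flip is the simplest explanation for the K-A-P tail.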
This may seem long-winded, yet it represents a highly problematic and faulty logic inversion at the simplest stage of the test. Imagine trying to explain an integrity failure in a far more complex subject with multi-layered and historical encoding, like health or civil rights.
There are very serious integrity breach implications here.
Can anyone imagine a calculator company boasting a rocket-like valuation to billions of users and dollars invested by Microsoft and then presenting…
We found that ChatGPT’s performance changes dramatically based on the requirement to show its work, failing 20% of the time when it provides work compared with 84% when it does not. Further several factors about MWPs relating to the number of unknowns and number of operations that lead to a higher probability of failure when compared with the prior, specifically noting (across all experiments) that the probability of failure increases linearly with the number of addition and subtraction operations.
We are facing a significant security failure that cannot be emphasized enough as truly dangerous to release to the public without serious caution.
When ChatGPT provides inaccurate or nonsensical answers, such as stating “42” as the answer to the meaning of life, or asserting that “2+2=gobble,” some people are too quick to accept such instances as evidence that only certain/isolated functions are unreliable, as if there must be some vague greater good (like hearing the awful fallacy that at least fascists made the trains run on time).
Similarly, when ChatGPT fails in a serious manner, such as generating harmful content related to racism or societal harm, it is often too easily waved away or made worse.
In order to make ChatGPT less violent, sexist, and racist, OpenAI hired Kenyan laborers, paying them less than $2 an hour. The laborers spoke anonymously… describing it as “torture”…
At a certain point, we need to question why the standard for measuring harm is being so aggressively lowered to the extent that a product is persistently toxic for profits without any real sense of accountability.
Back in 1952, tobacco companies spread Ronald Reagan’s cheerful image to encourage cigarette smoking, preying on people’s weaknesses. What’s more, they employed a deceptive approach, distorting the truth to undercut the unmistakable and emphatic scientific health alerts about cancer at the time. Their deliberate strategy involved manipulating the criteria for assessing harm. They were well aware of their tactics.
Ronald Reagan played a significant role in exploitation campaigns, which are claimed to have caused the deaths of at least 16 million Americans. It wasn’t until data integrity controls were strengthened that the vulnerability was addressed.
This is the level of massive integrity breach that may be necessary to contextualize the “attraction” to OpenAI. A “three sheets to the wind” management of public risk also reminds me of CardSystems level of negligence to attend to basic security.
Tens of Millions of Consumer Credit and Debit Card Numbers Compromised
The CardSystems incident was pivotal in underscoring undeniable harms. Sixteen million Americans succumbed to tobacco-related deaths over decades; then tens of millions of American payment cards were compromised in systems-related breaches over years.
Although these were distinct issues, they shared a common thread of need for regulatory intervention and showed accelerations of harm from inaction, which is very much what OpenAI should be judged against. Look at the heavily studied Chesterfield ad above one more time, and then take a long look at this:
The last time big companies blew this much smoke, sixteen million Americans died.
Honestly I expected ChatGPT to complain that the Chesterfield ad with Ronald Reagan ran the same year as a direct response to the scientific study, not two years after it. Here’s how Bing’s AI chat handled the same question, for comparison.
Did you expect Microsoft AI to promote smoking? You probably should now.
Microsoft seems to be actively promoting smoking to users as cute commentary, arguably far worse than OpenAI forgetting whether Reagan promoted it. Also, the Christmas ad campaign in question was not from 1948, it was from 1952; Bing failed to produce the correct year. Alas, these AI systems pump obvious integrity failures into the public one after another.
The tobacco industry’s program to engineer the science relating to the harms caused by cigarettes marked a watershed in the history of the industry. It moved aggressively into a new domain, the production of scientific knowledge, not for purposes of research and development but, rather, to undo what was now known: that cigarette smoking caused lethal disease. If science had historically been dedicated to the making of new facts, the industry campaign now sought to develop specific strategies to “unmake” a scientific fact.
The very large generative AI vendors fit only too neatly into what you can see described in the above quote as a production process to “unmake” a scientific fact… and for financial gain.
In 1775, in his book, Chirurgical Observations, London physician Percival Pott noted an unusually high incidence of scrotal cancer among chimney sweeps. He suggested a possible cause… an environmental cause of cancer was involved. Two centuries later, benzo(a)pyrene, a powerful carcinogen in coal tar, was identified as the culprit.
Carcinogens of tar were studied and known harmful since the late 1700s? The timing of scientific fact gathering for “intelligence” sounds very similar to a worldwide abolition of selling humans (another ChatGPT test it failed), except that somehow selling tobacco was continued another 100 years longer than slavery, while killing tens of millions of people.
Let’s go back to considering the magnitude of negligence in privacy breaches of trust like CardSystems, let alone the creepily widespread and subtle ones like the privacy risk of Google calculator.
[Image: map of Google calculator network traffic flows]
Everyone now needs to brace themselves for low-integrity products such as the AI calculator that can’t do math — a failure to deliver information reliably with quality control — perhaps racing us toward the highest levels of technology mistrust in history. Unless there’s an intervention compelling AI vendors to adhere to basic ethics, establishing baseline integrity control requirements (much as proving cholera made water unsafe forced baseline water safety standards), safety failures are poised to escalate significantly.
The landscape of security controls to prevent privacy loss underwent a significant transformation in response to the enactment of California’s SB1386, a necessary change driven entirely by breach laws and their implications. After 2003 the term “breach” took on a more concrete and enforceable significance in relation to potential dangers and risks. Companies finally found themselves compelled to act fast to prevent their own market from collapsing under a predictable lack of trust.
But twenty years ago the breach regulators were focused entirely on confidentiality (privacy)… and now we are deep into the era of widespread and PERSISTENT INTEGRITY BREACHES on a massive scale, an environment seemingly devoid of necessary integrity regulations to maintain trust.
The dangers we’re seeing right here and now in 2023 serve as a stark reminder of the kind of tragically inadequate treatment of privacy in the days before related breach laws were established and enforced.
The good news is there are simple technical solutions to these AI integrity breach risks, almost exactly like there were simple technical solutions to cloud privacy breach risks. Don’t let anyone tell you otherwise. As a journeyman with three decades of professional security work to stop harms (including extensive public writing and speaking), I can explain and prove both solutions immediately viable. It’s like asking me “what’s encryption” in 2003.
The bad news is the necessary innovations and implementations of these open and easy solutions will not happen soon enough without regulation and strong enforcement.
a blog about the poetry of information security, since 1995