Italian Privacy Authority Announces Investigation Into AI Data Collections

A nod to the Italy Intellectual Property Blog for an important story that I haven’t seen reported anywhere else:

The Italian Privacy Authority announced today that it has launched an investigation to verify whether websites are adopting adequate security measures to prevent the massive collection of personal data for the purpose of training AI algorithms. […] The investigation will therefore concern all data controllers who operate in Italy and make their users’ personal data available online (and thus accessible by developers of AI services), in order to verify whether said controllers adopt adequate security measures to safeguard their users’ rights.

I am especially curious whether Italy will address the integrity of user information, such as cases where data controllers have gotten things wrong. We are overdue for the development of data integrity breach language from state regulators.

Also in related news, Italy is moving forward with France and Germany on very interesting AI regulation that focuses on operational risks with the technology.

France, Germany and Italy have reached an agreement on how artificial intelligence should be regulated, according to a joint paper seen by Reuters, which is expected to accelerate negotiations at the European level. The three governments support “mandatory self-regulation through codes of conduct” for so-called foundation models of AI, which are designed to produce a broad range of outputs. But they oppose “un-tested norms.” […] “Together we underline that the AI Act regulates the application of AI and not the technology as such,” the joint paper said. “The inherent risks lie in the application of AI systems rather than in the technology itself.”

That direction sounds wise to me, given that even the most “secure” AI technology can translate directly into the most unsafe AI. For example, Tesla’s robots have been killing more people than any other robot in history because the company games narrowly focused “technology” tests (e.g. “five star” crash ratings) to mislead people into using its unsafe AI, and then tragically and foolishly promotes unsafe (arguably cruel and anti-social) operation. I’m reminded of a Facebook breach that some in CISO circles were calling “insider threat from machine”.

“Good management trumps good technology every time, yet due to the ever-changing threatscape of the tech industry, inexperienced leadership is oftentimes relied upon for the sake of expediency.” He continues, how within the world of cybersecurity, “The Peter Principle is in full effect. People progress to their level of incompetence, meaning a lot of people in leadership within cyber have risen to a level that is difficult for them to execute and often lack formal technical training. As a CISO, there is a need to configure, identify, and negotiate the cost of protecting an organization, and without the adequate experience or a disciplined approach, this mission is executed poorly.”

Speed of production that leans on inexperienced humans who also lack discipline (regulation) opens the door to even the most “secure by design” technology becoming an operational (societal) nightmare.

America was built on and by regulation. It depends on regulation. A lack of regulation, from those who promote the permanent improvisation of tyranny, will destroy it.

SpaceX, a Private Military Company (PMC), Tries to Meddle in Another War

SpaceX CEO “Space Karen” defends his mercenary-like corporate strategy by delivering “bizarre and crude comments” at the New York Times DealBook Summit. Source: The Ringer

I’m beginning to wonder if reports like this 2002 one about Russian arms dealers are the real reason SpaceX was founded that same year and keeps sticking its nose into conflicts.

…corporate armies, often providing services normally carried out by a national military force, offer specialized skills in high-tech warfare, including communications and signals intelligence and aerial surveillance, as well as pilots, logistical support, battlefield planning and training. They have been hired both by governments and multinational corporations to further their policies or protect their interests.

For example, the South African-born man who founded SpaceX was very well aware that…

Two helicopter gunships piloted by South African mercenaries, for example, altered the balance of war in Sierra Leone in 1999 in favor of the government.

Fast-forward, and that South African-born founder of SpaceX has been widely panned for running a Private Military Company (PMC) meddling in the Ukraine war, especially after fraudulently claiming he altered the balance of that war (e.g. by helping Russia).

SpaceX keeps failing on its supposed “primary” mission to get a rocket to work properly, and yet it is again distracted and diverting resources into another war that seems to involve Russia.

Starlink, a satellite internet service operated by the Elon Musk-owned SpaceX, will only be allowed to operate in the Gaza Strip following approval by the Israeli Ministry of Communication. … [The Self-proclaimed “free speech absolutist” Elon Musk] has “identified and removed hundreds of Hamas-affiliated accounts” since the start of the war.

Related: With all the bluster and bombast of a typical South African mercenary outfit, SpaceX promises to build a time machine to renegotiate all its broken promises to land on Mars by 2018.

AI Falls Apart: CEO Removed for Failing Ethics Test is Put Back Into Power by “Full Evil” Microsoft

Confusing signals are emanating from Microsoft’s “death star”, with some ethicists suggesting that it’s not difficult to interpret the “heavy breathing” of “full evil”. Apparently the headline we should be seeing any day now is: Former CEO ousted in palace coup, later reinstated under Imperial decree.

Even by his own admission, Altman did not stay close enough to his own board to prevent the organizational meltdown that has now occurred on his watch. […] Microsoft seems to be the most clear-eyed about the interests it must protect: Microsoft’s!

Indeed, the all-too-frequent comparison of this overtly anti-competitive company to a fantasy “death star” is not without reason: it echoes political science 101 principles, rooted in the historical events that influenced a fictional retelling. Still, science fiction like “Star Wars” is a derivative analogy, not necessarily the sole or even the most fitting popular guide in this context.

William Butler Yeats’ “The Second Coming” is an even better reference, one that every old veteran probably knows. If only American schools made it required reading, some basic poetry could have helped protect national security (better enabling organizational trust and stability of critical technology). Chinua Achebe’s “Things Fall Apart” (named for Yeats’ poem) is perhaps an even better, more modern guide through such troubled times.

“The falcon cannot hear the falconer; Things fall apart; the center cannot hold; Mere anarchy is loosed upon the world.” Things Fall Apart was the debut novel of Nigerian author Chinua Achebe, published in 1958.

Here’s a rough interpretation of Yeats through Achebe, applied as a key to decipher our present news cycles:

Financial influence empowers a failed big tech CEO with privilege, enabling their reinstatement. This, in turn, facilitates the implementation of disruptive changes in society, benefiting a select few who assume they can shield themselves from the widespread catastrophes unleashed upon the world for selfish gains.

And now for some related news:

The US, UK, and other major powers (notably excluding China) unveiled a 20-page document on Sunday that provides general recommendations for companies developing and/or deploying AI systems, including monitoring for abuse, protecting data from tampering, and vetting software suppliers.

The agreement warns that security shouldn’t be a “secondary consideration” regarding AI development, and instead encourages companies to make the technology “secure by design”.
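As a concrete illustration of what “protecting data from tampering” can mean in practice, here is a minimal sketch of tamper-evidence for a training dataset using an HMAC. The key, dataset, and function names are hypothetical assumptions for illustration; a real deployment needs proper key management and rotation.

```python
# Sketch: keyed integrity check (HMAC-SHA256) so any modification of
# training data is detectable before the data is used.
import hashlib
import hmac

SECRET_KEY = b"hypothetical-secret-key"  # assumption: stored securely elsewhere

def sign(data: bytes) -> str:
    """Return a hex HMAC tag binding the data to the secret key."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, tag: str) -> bool:
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(sign(data), tag)

dataset = b"user records exported for model training"
tag = sign(dataset)
assert verify(dataset, tag)                     # intact data passes
assert not verify(dataset + b"tampered", tag)   # any modification fails
```

The point of the sketch is that integrity is a design property: it has to be built in before the data moves, not bolted on after a breach.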

That doesn’t say ethical by design. That doesn’t say moral. That doesn’t even say quality.

It says only secure, which is a known “feature” of dictatorships and prisons alike. How did Eisenhower put it in the 1950s?

From North Korea to American “slave catcher” police culture, we see that an excessive focus on security without a moral foundation leads to unjust incarceration. When security measures are exploited, they crowd out the core elements of “middle ground” political action, such as compassion and care for others.

If you enjoyed this post, please go out and be very unlike Microsoft: do a kind thing for someone else, because (despite what the big tech firms are trying hard to sell you) the future is not to foresee but to enable.

Not the death star

TX Tesla Robot Hits Human Worker, Draws Blood

Tesla’s assembly robot decorated with a “White Hood” insignia. Source: Twitter

The recently established Tesla plant in Texas reportedly exhibits more safety issues than the company’s previous facility in California. Tesla appears to be operating with a concerning lack of safety measures, earning its products the colloquial label of “blood vehicles,” akin to the association of blood diamonds with South Africa, with the added concern of robots entering the risk assessment.

Two of the robots, which cut car parts from freshly cast pieces of aluminum, were disabled so the engineer and his teammates could safely work on the machines. A third one, which grabbed and moved the car parts, was inadvertently left operational, according to two people who watched it happen. As that robot ran through its normal motions, it pinned the engineer against a surface, pushing its claws into his body and drawing blood from his back and his arm, the two people said.

The description, “pushing its claws into his body and drawing blood,” epitomizes Tesla’s failure to ensure worker safety. Is there a cartoon somewhere depicting Elon Musk’s claws penetrating a vulnerable immigrant worker to extract blood?

As I typed “Elon Musk robot…” into an AI image generator this very strangely perceptive suggestion popped up. I then closed the prompt. This is all I got.

Does this allude more to StormFront or Swastika?

You may remember that Tesla in California was allegedly under-reporting injuries, yet still carried a much higher injury rate than the national average.

Tesla’s total recordable incidence rate (TRIR) in 2015 was 31 percent higher than the industry-wide incident rate
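For readers unfamiliar with the metric, TRIR is OSHA’s standard incidence rate: recordable incidents normalized to 200,000 hours (100 full-time workers at 2,000 hours each per year). Here is a minimal sketch of the formula; the plant and industry figures below are hypothetical, not Tesla’s actual numbers.

```python
# Sketch of OSHA's Total Recordable Incidence Rate (TRIR):
#   TRIR = (recordable incidents x 200,000) / total hours worked
# where 200,000 = 100 full-time workers x 2,000 hours/year.

def trir(recordable_incidents: int, hours_worked: float) -> float:
    """Recordable incidents per 100 full-time workers per year."""
    return recordable_incidents * 200_000 / hours_worked

# Hypothetical plant: 6,000 workers x 2,000 hours, 520 recordable incidents
plant_rate = trir(520, 6_000 * 2_000)   # about 8.67 per 100 workers

# "31 percent higher than the industry-wide incident rate" means the
# plant rate exceeds the industry average by a factor of 1.31, e.g.
industry_rate = 6.7                      # hypothetical industry average
threshold = industry_rate * 1.31         # a plant above ~8.78 is 31%+ higher
```

The normalization is what makes the 31 percent comparison meaningful: it controls for workforce size, so a larger plant cannot hide a worse rate behind a bigger denominator.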

Here’s how the problem was being reported way back in 2018:

Undercounting injuries is one symptom of a more fundamental problem at Tesla: The company has put its manufacturing of electric cars above safety concerns, according to five former members of its environment, health and safety team who left the company last year. That, they said, has put workers unnecessarily in harm’s way. […] “Everything took a back seat to production,” White said. “It’s just a matter of time before somebody gets killed.”

Note that last line, a stark warning. Can you guess what comes next? Tesla tries to troll safety experts and then falls on its own sword.

March 2019:

Tesla: ‘The most important metric is fatalities, and our number is zero’

August 2019:

A Tesla employee died at the Gigafactory earlier this month — and the investigation is ongoing

And — the actual most important metric — did safety ever improve or only continue to get worse over time?

January 2022:

Tesla Fremont factory employee dies while working on production line

I have a sense that there’s a paradoxical aspect to the work here: the earlier factories seemed safer when Germans, secretly provided by Siemens, ran all the construction and management. Tesla’s CEO was famously inexperienced in car manufacturing, knew next to nothing, and his interference was only starting. Conversely, the newer factories pose a far greater threat to worker safety, as elaborated in a recent article in The Atlantic I co-authored.

Why?

Over time, these factories, under the influence of an unpredictable CEO, neglect historical lessons and openly violate established regulations. They increasingly recruit inexperienced individuals and sycophants, whose primary purpose is to manipulate metrics and cater to the fragile and discriminatory whims of the CEO, ultimately leading to the unnecessary loss of hundreds of lives or more. The latest reports from Germany in 2023, as highlighted by The Telegraph, depict a catastrophic situation.

‘High frequency’ of injuries at German Tesla factory included burns and amputation. Emergency services were called to the Grünheide plant 250 times last year

What you are seeing is Germany reporting actual safety numbers, while similar or higher numbers of incidents are likely happening at all Tesla facilities.

When it comes to safety consciousness, Germany surpasses not only Texas and China, but even California. The purported justifications behind Tesla’s move to shift production to Texas are now thoroughly documented, detailed in tragic safety reports such as those from the Texas Standard.

A Texas Observer investigation found that injuries and deaths related to the construction of the Tesla factory near Austin weren’t properly reported: “…[Tesla] must report any injuries and deaths that occurred during construction. I found all these missing injuries and death. I told the county. The county has since asked Tesla to go back and provide the missing information… In general, Texas has the most worker deaths of any state, including California. A lot of workers die in Texas. The figures I pulled for 2021, a worker died in Texas every 16 and a half hours and a construction laborer died every three days.”

Where a worker dies every day, Tesla’s South African-born CEO surely feels like he’s found a new home.

Source: Twitter