OpenAI Board Rotates From CIA to NSA

You may recall when people discussed the OpenAI board in terms of Helen Toner and Tasha McCauley, who became ex-members as Sam Altman was ushered back in a flurry of propaganda by Microsoft.

When we were recruited to the board of OpenAI—Tasha in 2018 and Helen in 2021—we were cautiously optimistic that the company’s innovative approach to self-governance could offer a blueprint for responsible AI development. But based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives.

Helen Toner, of a certain Security Studies program at Georgetown University, was perhaps recruited as a voice of reason in terms of helping manage In-Q-Tel interests. Pushed out by Microsoft’s “full evil” team, she was not exactly working on security in terms of safe operations for a wobbly back-stabbing startup culture, if you know what I mean.

Now OpenAI is pivoting, quite awkwardly, in an entirely different direction with a new announcement: “enterprise” operations overseen by an ex-director of the NSA.

OpenAI on Thursday announced its newest board member: Paul M. Nakasone, a retired U.S. Army general and former director of the National Security Agency. Nakasone was the longest-serving leader of the U.S. Cyber Command and chief of the Central Security Service.

“Mr. Nakasone’s insights will also contribute to OpenAI’s efforts to better understand how AI can be used to strengthen cybersecurity by quickly detecting and responding to cybersecurity threats,” OpenAI said in a blog post.

The company said Nakasone will also join OpenAI’s recently created Safety and Security Committee. The committee is spending 90 days evaluating the company’s processes and safeguards before making recommendations to the board and, eventually, updating the public, OpenAI said.

Notably Nakasone joins as the only person on the board with national-level information security expertise and experience with massive operations. Arguably such expertise has been sorely missing from the OpenAI board, given its attempts both to terrify everyone with instability and to appear indispensable to American stability interests.

Looking on the bright side, perhaps Nakasone’s appointment will open his eyes and help him articulate privately to the Pentagon why Palantir’s ongoing nonsense should be driven hard out of town. As much as I find OpenAI to be poorly run, opaque and repeatedly stumbling over its questionable intentions (all attributes of Palantir), it still seems fundamentally better than the immoral things an historically ignorant Peter Thiel has done.

Who can forget Palantir paying U.S. Congressmen to viciously attack the U.S. Army in order to shamelessly force the government into buying Palantir products? Even more to the point, Palantir sued in court to force the Army into buying proprietary products from them designed to lock-in customers.

Then again OpenAI could be an even worse version of Stanford-laced Thielism.

Elon Musk Argues KKK Hoods Good for “Edgy” Expression and Protecting Public Image

Edgy. That’s the term the misogynist hate platform Twitter has started using to justify deploying digital Klan hoods to protect toxic accounts from accountability. I mean edgy accounts.

Musk and his engineers say that the update is a matter of encouraging free expression. “Important to allow people to like posts without getting attacked for doing so!” Musk argues. His director of engineering says, “Public likes are incentivizing the wrong behavior. For example, many people feel discouraged from liking content that might be ‘edgy’ in fear of retaliation from trolls, or to protect their public image.”

One obviously “gaslit” part of this is how Elon Musk runs multiple inauthentic user accounts where he can like things without any effect on his public image.

On the one hand he argues that he doesn’t want any inauthentic accounts on a platform (while running his own inauthentic accounts), and then on the other hand he argues he doesn’t want people on the platform to be seen for who they are. Nothing means anything, there is no truth, only the “will” of dear leader that changes by the minute to suit his own desires.

Pretty sure, in other words, his flagrantly obvious “gaslit” stance means he really wants to censor and curate his platform into the single largest hate rally in human history.

Rhodesia’s Dead — but White Supremacists Have Given It New Life Online

Elon Musk being from South Africa is notable, given his affinity for Rhodesian toxic doublespeak about freedom of speech used to spread hate and censor people. I remember and recognize it well.

He quotes the Declaration of Rights which guarantees freedom of discussion, freedom of speech and expression—and therefore, of agitation—with no discrimination on grounds of race, colour or creed. If from now on that were to be the basis of Government in Southern Rhodesia I should be very much persuaded and certainly play my part in persuading the Africans, but I think every one of us who is fair-minded will agree that the present situation in Southern Rhodesia in these respects is the worst in the whole Commonwealth. […]

Bearing in mind the unhappy history of white minority government in Southern Africa, we should say this afternoon that we will not surrender our reserved powers before existing legislation in Southern Rhodesia which is incompatible with the Declaration of Rights is removed. Unless we do that, we shall make a mockery of the whole inclusion of the Declaration of Rights and get the white man labelled as a hypocrite across the length and breadth of Africa.

Elon Musk has left his unhappy family’s failure at running white nationalism on the African continent only to expand such rank hypocrisy more broadly.

After Years of Targeted Ads LinkedIn Busted for Ignoring User Consent Requirements

Related: Whistleblower proves Microsoft chose profit over safety, leaving U.S. government vulnerable to Russian attack.

You may remember in 2022 that a social media company faced a massive formal complaint by U.S. regulators. The complaint said its ad targeting and delivery system was illegally exploiting user data it collected, including race, religion, sex, disability, national origin, or familial status. Basically advertisers were being given unauthorized access to users’ private data in order to target people on protected characteristics or proxies for those protected characteristics.

Fast forward to today, a complaint was filed by multiple civil society organizations (e.g. European Digital Rights, Gesellschaft für Freiheitsrechte, Global Witness, Bits of Freedom) against LinkedIn, which thus finds itself in similar hot water with the European Commission.

As you can see, LinkedIn did not heed a clear warning shot fired at it by the U.S. DoJ, and it did not self-regulate properly. Instead the Microsoft brand continued undermining user safety and ignoring regulations until an external enforcement action landed on its head.

Under the Digital Services Act (DSA), online intermediaries are required to give users more control on the use of their data, with an option for them to turn off personalised content. Companies are not allowed to use sensitive personal data such as race, sexual orientation or political opinions for their targeted ads. The Commission had in March sent a request for information to LinkedIn after the groups said the tool may allow advertisers to target LinkedIn users based on racial or ethnic origin, political opinions and other personal data due to their membership of LinkedIn groups.

LinkedIn wasn’t able to explain their ongoing position on user data and so they have announced a decision to disable a targeting “tool” they profit from. This is actually a political move, not a wise technology decision, given much better controls exist than the binary decision they made. The response indicates a lack of LinkedIn management preparedness for real world user safety needs, as the company feeds reactionary misperceptions of regulations.

The DSA requires online intermediaries to provide users with more control over their data, including an option to turn off personalised content and to disclose how algorithms impact their online experience. It also prohibits the use of sensitive personal data, such as race, sexual orientation, or political opinions, for targeted advertising.

Disabling a tool in response to an incident could have been avoided had leadership taken proactive steps toward robust engineering practices that prioritize user trust. Consider if they had implemented measures years ago to empower users with greater control over their data, including a transparent consent interface and clear visibility into data processing, aligning with regulatory requirements and recent high-profile enforcement actions.
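To make the point concrete, here is a minimal sketch of what a consent-first design might look like, assuming nothing about LinkedIn’s actual systems (the class and category names are hypothetical). Every data category defaults to “off” until an explicit opt-in is recorded, and sensitive categories are refused for ad targeting outright, mirroring the DSA’s prohibition:

```python
# Hypothetical sketch, not LinkedIn's code: sensitive categories are never
# available for ad targeting, and everything else defaults to no consent.
SENSITIVE = {"race", "religion", "sexual_orientation", "political_opinions", "health"}

class ConsentRegistry:
    def __init__(self):
        # (user_id, category) -> True only after an explicit, recorded opt-in
        self._grants = {}

    def opt_in(self, user_id, category):
        self._grants[(user_id, category)] = True

    def opt_out(self, user_id, category):
        self._grants.pop((user_id, category), None)

    def may_target(self, user_id, category):
        # Sensitive categories are prohibited regardless of consent.
        if category in SENSITIVE:
            return False
        # All other categories are off by default until the user opts in.
        return self._grants.get((user_id, category), False)

registry = ConsentRegistry()
registry.opt_in("u1", "job_interests")
print(registry.may_target("u1", "job_interests"))      # True: explicit opt-in
print(registry.may_target("u1", "employer_size"))      # False: no consent given
print(registry.may_target("u1", "political_opinions")) # False: prohibited outright
```

The point of the sketch is the defaults: a user-safety architecture denies by default and logs consent explicitly, instead of bolting a kill switch onto a targeting tool after regulators arrive.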

These steps could and should have been implemented long ago.

Instead of taking proactive measures, they waited until their actions jeopardized user safety, and now they seek recognition for disabling a tool that should never have been developed.

Addressing these issues is straightforward and essential.

It is something they should have started planning immediately upon that DoJ enforcement of 2022, given the availability of the W3C Solid protocol, built on principles dating back to 1989, to provide the necessary user safety architecture and features. The EU has now exposed LinkedIn management, because the tech giant apparently chose to ignore harms until EU law forced it to pay attention.
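For readers unfamiliar with Solid, its access model (Web Access Control) puts the authorization file in the user’s own data pod. A fragment like the following, written in Turtle, is illustrative only (the pod and resource URLs are made up), but it shows the inversion Solid enables: the user holds Control over their profile data, and an advertiser sees nothing unless the user authors an authorization granting it.

```turtle
@prefix acl: <http://www.w3.org/ns/auth/acl#>.

# The pod owner alone may read, write, and administer this resource.
<#owner>
    a acl:Authorization;
    acl:agent <https://alice.example/profile/card#me>;
    acl:accessTo <./profile-data>;
    acl:mode acl:Read, acl:Write, acl:Control.

# No authorization exists for any advertiser; absence of a rule means
# absence of access. Targeting would require the user to add one.
```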

Also related: It took an embarrassingly massive hack by China before users were allowed by Microsoft to see their own logs. CISA put it mildly:

Asking organizations to pay more for necessary logging is a recipe for inadequate visibility into investigating cybersecurity incidents and may allow adversaries to have dangerous levels of success in targeting American organizations.

There’s terrible irony in how hard Microsoft management has worked towards granting unsafe levels of visibility to Russia, China and advertisers… yet not their own users.

CrowdStrike has been far more direct in its statements, once it realized how Microsoft wasn’t being transparent:

Throughout our analysis, we experienced first hand the difficulties customers face in managing Azure’s administrative tools to know what relationships and permissions exist within Azure tenants, particularly with third-party partner/resellers, and how to quickly enumerate them. We found it particularly challenging that many of the steps required to investigate are not documented, there was an inability to audit via API, and there is the requirement for global admin rights to view important information which we found to be excessive. Key information should be easily accessible.

Tesla FSD Slams Into Parked Police Car, Ignoring Flares and Flashing Lights, Nearly Killing Two Officers

The Tesla robot owner (i.e. a soldier of Elon Musk) admitted to doing what the CEO told them to do, following orders to handle their robot as if it had the capability to drive itself.

A Fullerton Police Department officer was investigating a fatal crash around 12:04 a.m. near Orangethorpe and Courtney Avenues, according to a department news release. The officer was managing traffic at the time and emergency flares had been placed on the road.

The officer was standing outside his patrol vehicle, with its emergency lights on, and managed to jump out of the way before the driver of a blue Tesla crashed into his car, authorities said. A police dispatcher, who was riding in the patrol vehicle, also moved out of the way of the crash. […]

The Tesla driver admitted he was operating the vehicle in self-driving mode while using his cellphone, police said.

Police car with flares deployed and flashing its lights crushed by Tesla using FSD. Source: ABC7
A driver in a Tesla Model S crashed into a police cruiser in Orange County while operating in full self-driving mode earlier Thursday morning. Source: Los Angeles Times (OC Hawk)

A driver not paying attention is exactly why Tesla was just forced by regulators to issue a huge recall. And so this crash raises the question of whether that recall effort was bogus.

The federal government’s main auto safety agency said on Friday that it was investigating Tesla’s recall of its Autopilot driver-assistance system because regulators were concerned that the company had not done enough to ensure that drivers remained attentive while using the technology.

Yeah. Fraud again. Without it, there would be no Tesla. Can’t even do a recall right. How bad are they? So bad, it’s hard to believe they are even allowed in public. Tesla needs to be reclassified by regulators as a sad clown car fit only for a circus.

NHTSA said there were gaps in Tesla’s telematic data reporting on crashes involving Autopilot since the automaker primarily gets data from crashes involving air bag deployments, which account for only about one-fifth of police-reported crashes. …evidence that “Tesla’s weak driver engagement system was not appropriate for Autopilot’s permissive operating capabilities” that result in a “critical safety gap.” …”foreseeable driver misuse of the system played an apparent role.” NHTSA noted Tesla’s December recall “allows a driver to readily reverse” the software update.

Wow, Tesla has killed over 500 people so far and we’re seeing investigation of only a fifth? Imagine airplanes crashing and 80% of the time going without reporting or investigation.
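The undercount follows directly from NHTSA’s one-fifth figure. A back-of-envelope sketch, using an illustrative observed count rather than any official Tesla number:

```python
# If telematics mainly capture crashes with airbag deployment, and those are
# roughly one-fifth of police-reported crashes, observed counts understate
# the real total by about 5x. The observed_crashes value is illustrative.
airbag_share = 0.20      # share of police-reported crashes with airbag deployment
observed_crashes = 100   # hypothetical count visible via Tesla telematics

estimated_total = observed_crashes / airbag_share
unreported = estimated_total - observed_crashes

print(estimated_total)   # 500.0
print(unreported)        # 400.0 -> roughly 80% of crashes invisible to the data
```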

The police officer fortunately defied death in this latest case by remaining very alert, taking evasive action to avoid the incoming Tesla.

A Fullerton Police Department spokesperson said the officer was standing outside his vehicle around midnight when he saw a Tesla driving in his direction and not slowing down.

The officer was able to jump out of the way as the Tesla slammed into the police car, spinning the patrol vehicle around and causing major damage to its front end.

We’re waiting to hear if it was the latest software, but it really doesn’t matter. Engineers still haven’t fixed the problem that caused their first fatality using Tesla driverless software in 2016.

Think hard about that.

Even flashing lights on service vehicles haven’t been figured out yet by Tesla, despite eight years of high-profile deaths and harm to public safety from their fraud. Nearly a decade of false promises about “driverless” products, while constantly crashing into things, which has only been getting worse if you follow this blog.

Move fast and undermine democracy?

It seems entirely plausible in this context that the big Tesla 8/8 (“Heil Hitler”) Robot Nazi launch date may include a call from Elon Musk to overthrow law and order — a directive to his millions of believers to distractedly let their robots loose as soldiers, increasing their rate of killing American government workers such as first responders.