After Years of Targeted Ads, LinkedIn Busted for Ignoring User Consent Requirements

Related: Whistleblower proves Microsoft chose profit over safety, leaving U.S. government vulnerable to Russian attack.

You may remember that in 2022 a social media company faced a massive formal complaint from U.S. regulators. It alleged that the company’s ad targeting and delivery system illegally exploited user data it had collected, including race, religion, sex, disability, national origin, and familial status. In effect, advertisers were given unauthorized access to users’ private data in order to target people on protected characteristics, or on proxies for those protected characteristics.

Fast forward to today: multiple civil society organizations (e.g. European Digital Rights, Gesellschaft für Freiheitsrechte, Global Witness, Bits of Freedom) have filed a complaint against LinkedIn, which now finds itself in similar hot water with the European Commission.

As you can see, LinkedIn did not heed a clear warning shot fired at it by the U.S. DoJ, and did not self-regulate properly. Instead the Microsoft brand continued undermining user safety and ignoring regulations, until an external enforcement action landed on its head.

Under the Digital Services Act (DSA), online intermediaries are required to give users more control over the use of their data, with an option for them to turn off personalised content. Companies are not allowed to use sensitive personal data such as race, sexual orientation or political opinions for their targeted ads. The Commission had in March sent a request for information to LinkedIn after the groups said the tool may allow advertisers to target LinkedIn users based on racial or ethnic origin, political opinions and other personal data due to their membership of LinkedIn groups.

LinkedIn was unable to explain its ongoing position on user data, and so it announced a decision to disable a targeting “tool” it profits from. This is actually a political move, not a wise technology decision, given that much better controls exist than the binary choice it made. The response indicates a lack of management preparedness for real-world user safety needs, as the company feeds reactionary misperceptions of regulation.


Disabling a tool in response to an incident could have been avoided had proactive steps been taken toward robust engineering practices that prioritize user trust. Consider if leadership had implemented measures years ago to empower users with greater control over their data, including a transparent consent interface and clear visibility into data processing, aligning with regulatory requirements and recent high-profile enforcement actions.
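To make the idea concrete, here is a minimal sketch (in Python, with hypothetical names and categories, loosely modeled on the DSA’s ban on sensitive-data targeting) of the kind of consent-gated targeting filter such an interface would sit on top of: sensitive categories are hard-blocked regardless of consent, and everything else requires an explicit opt-in.

```python
# Minimal sketch of a consent-gated ad targeting filter.
# Category names and the consent record format are hypothetical.
SENSITIVE_CATEGORIES = {
    "racial_or_ethnic_origin",
    "political_opinions",
    "religious_beliefs",
    "sexual_orientation",
    "health",
}

def allowed_targeting(requested_attributes, user_consent):
    """Return only the attributes an advertiser may target for this user.

    Sensitive categories are always excluded (no opt-in possible);
    all other attributes require an explicit opt-in in the consent record.
    """
    return {
        attr
        for attr in requested_attributes
        if attr not in SENSITIVE_CATEGORIES   # hard prohibition
        and user_consent.get(attr, False)     # default-deny: opt-in required
    }
```

A user-facing consent dashboard then becomes a thin layer over the `user_consent` record, rather than a binary on/off switch for an entire tool.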

These steps could and should have been implemented long ago.

Instead of taking proactive measures, they waited until their actions jeopardized user safety, now seeking recognition for disabling a tool that should never have been developed.

Addressing these issues is straightforward and essential.

Planning should have started immediately upon that DoJ enforcement of 2022, given the availability of the W3C Solid protocol, which builds on web principles dating back to 1989 to provide the necessary user safety architecture and features. The EU has now exposed LinkedIn management, because the tech giant apparently chose to ignore harms until forced by EU law to pay attention.

Also related: it took an embarrassingly massive hack by China before Microsoft allowed users to see their own logs. CISA put it mildly:

Asking organizations to pay more for necessary logging is a recipe for inadequate visibility into investigating cybersecurity incidents and may allow adversaries to have dangerous levels of success in targeting American organizations.

There’s terrible irony in how hard Microsoft management has worked to grant unsafe levels of visibility to Russia, China and advertisers… yet not to their own users.

CrowdStrike has been far more direct in its statements, once it realized Microsoft wasn’t being transparent:

Throughout our analysis, we experienced first hand the difficulties customers face in managing Azure’s administrative tools to know what relationships and permissions exist within Azure tenants, particularly with third-party partner/resellers, and how to quickly enumerate them. We found it particularly challenging that many of the steps required to investigate are not documented, there was an inability to audit via API, and there is the requirement for global admin rights to view important information which we found to be excessive. Key information should be easily accessible.

Tesla FSD Slams Into Parked Police Car, Ignoring Flares and Flashing Lights, Nearly Killing Two Officers

The Tesla robot owner (i.e. a soldier of Elon Musk) admitted to doing what the CEO told them to do, following orders to handle their robot as if it has the capability to drive itself.

A Fullerton Police Department officer was investigating a fatal crash around 12:04 a.m. near Orangethorpe and Courtney Avenues, according to a department news release. The officer was managing traffic at the time and emergency flares had been placed on the road.

The officer was standing outside his patrol vehicle, with its emergency lights on, and managed to jump out of the way before the driver of a blue Tesla crashed into his car, authorities said. A police dispatcher, who was riding in the patrol vehicle, also moved out of the way of the crash. […]

The Tesla driver admitted he was operating the vehicle in self-driving mode while using his cellphone, police said.

Police car with flares deployed and flashing its lights crushed by Tesla using FSD. Source: ABC7
A driver in a Tesla Model S crashed into a police cruiser in Orange County while operating in full self-driving mode earlier Thursday morning. Source: Los Angeles Times (OC Hawk)

A driver not paying attention is exactly why regulators just forced Tesla to issue a huge recall. And so this crash raises the question of whether that recall effort was bogus.

The federal government’s main auto safety agency said on Friday that it was investigating Tesla’s recall of its Autopilot driver-assistance system because regulators were concerned that the company had not done enough to ensure that drivers remained attentive while using the technology.

Yeah. Fraud again. Without it, there would be no Tesla. Can’t even do a recall right. How bad are they? So bad, it’s hard to believe they are even allowed in public. Tesla needs to be reclassified by regulators as a sad clown car fit only for a circus.

NHTSA said there were gaps in Tesla’s telematic data reporting on crashes involving Autopilot since the automaker primarily gets data from crashes involving air bag deployments, which account for only about one-fifth of police-reported crashes. …evidence that “Tesla’s weak driver engagement system was not appropriate for Autopilot’s permissive operating capabilities” that result in a “critical safety gap.” …”foreseeable driver misuse of the system played an apparent role.” NHTSA noted Tesla’s December recall “allows a driver to readily reverse” the software update.

Wow, Tesla has killed over 500 people so far and we’re seeing data on only a fifth of crashes? Imagine if 80% of airplane crashes went without reporting or investigation.

The police officer fortunately defied death in this latest case by remaining very alert, taking evasive action to avoid the incoming Tesla.

A Fullerton Police Department spokesperson said the officer was standing outside his vehicle around midnight when he saw a Tesla driving in his direction and not slowing down.

The officer was able to jump out of the way as the Tesla slammed into the police car, spinning the patrol vehicle around and causing major damage to its front end.

We’re waiting to hear if it was the latest software, but it really doesn’t matter. Engineers still haven’t fixed the problem that caused the first fatality involving Tesla driverless software in 2016.

Think hard about that.

Tesla still hasn’t figured out even flashing lights on service vehicles, despite eight years of high-profile deaths and harm to public safety from its fraud. Nearly a decade of false promises about “driverless” products, while constantly crashing into things, which has only been getting worse if you follow this blog.

Move fast and undermine democracy?

It seems entirely plausible in this context that the big Tesla 8/8 (“Heil Hitler”) Robot Nazi launch date may include a call from Elon Musk to overthrow law and order: a directive to his millions of believers to distractedly deploy their robots as soldiers and increase their rate of killing American government workers such as first responders.

Total Recall After Waymo Robot Taxi Crashes Into Pole

A crash into a pole by the Waymo driverless vehicle has prompted a second recall of the entire taxi fleet.

The company is under investigation by NHTSA for over two dozen incidents involving its driverless vehicles, including several “single-party” crashes and possible traffic law violations. Several incidents involved crashes with stationary objects, much like the May 21st crash with the telephone pole. […]

Waymo’s recall was deployed at the company’s depot by its team of engineers, not through an over-the-air software update.

Pole position usually refers to a series of Tesla crashes. Now we can add one Waymo.

Who Will be the Harry Markopolos of Tesla?

Harry mathematically proved that Bernie Madoff was a fraud, and repeatedly delivered the proof to the SEC until they finally did something.

The clock ticks far too long on Tesla. Why isn’t Elon Musk going to jail? Who will be the Harry Markopolos of today?

For just one out of hundreds of examples: ZEV credits were based on range, so Tesla lied egregiously about its range figures. This intentionally engineered fraud soaked up billions in unearned credits, and customers realized only too late that they had paid into a scam. Tesla could owe the U.S. government the return of those bilked billions.

Meanwhile Tesla claims that when it miscalculates, such as the severance paid to staff it laid off, it will sue the unemployed for every single penny.

Coffeezilla, where are you? You covered the X token already. Remember X token?