Related: Whistleblower proves Microsoft chose profit over safety, leaving U.S. government vulnerable to Russian attack.
You may remember that in 2022 a social media company faced a massive formal complaint from U.S. regulators. The complaint alleged its ad targeting and delivery system illegally exploited user data it had collected, including race, religion, sex, disability, national origin, and familial status. In effect, advertisers were given unauthorized access to users’ private data in order to target people on protected characteristics, or on proxies for those characteristics.
Fast forward to today: multiple civil society organizations (e.g. European Digital Rights, Gesellschaft für Freiheitsrechte, Global Witness, Bits of Freedom) have filed a complaint against LinkedIn, which now finds itself in similar hot water with the European Commission.
As you can see, LinkedIn did not heed a clear warning shot fired at it by the U.S. DoJ, and it did not self-regulate properly. Instead the Microsoft brand continued undermining user safety and ignoring regulations, until an external enforcement action landed on its head.
Under the Digital Services Act (DSA), online intermediaries are required to give users more control over the use of their data, including an option to turn off personalised content. Companies are not allowed to use sensitive personal data such as race, sexual orientation or political opinions for their targeted ads. In March the Commission sent a request for information to LinkedIn after the groups said the tool may allow advertisers to target LinkedIn users based on racial or ethnic origin, political opinions and other personal data inferred from their membership in LinkedIn groups.
LinkedIn was unable to explain its ongoing position on user data, and so it announced a decision to disable a targeting “tool” it profits from. This is a political move, not a wise technology decision, given that far better controls exist than the binary choice they made. The response indicates a lack of LinkedIn management preparedness for real-world user safety needs, as the company feeds reactionary misperceptions of regulation.
The DSA requires online intermediaries to provide users with more control over their data, including an option to turn off personalised content and to disclose how algorithms impact their online experience. It also prohibits the use of sensitive personal data, such as race, sexual orientation, or political opinions, for targeted advertising.
Disabling a tool in response to an incident could have been avoided with proactive, robust engineering practices that prioritize user trust. Consider if leadership had implemented measures years ago to empower users with greater control over their data, including a transparent consent interface and clear visibility into data processing, aligning with regulatory requirements and recent high-profile enforcement actions.
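To make the point concrete, here is a minimal sketch (in TypeScript, with entirely hypothetical type and function names, not anything from LinkedIn’s actual systems) of the kind of consent gate the DSA effectively demands: personalised targeting is blocked unless the user opted in, and sensitive attributes never reach the targeting engine either way.

```typescript
// Hypothetical sketch: a consent gate enforcing two DSA-style rules before
// any ad-targeting request is evaluated:
//   1. users who turned off personalisation only get contextual ads, and
//   2. sensitive attributes are never used for targeting.
// All names are illustrative placeholders.

type TargetingRequest = {
  userId: string;
  attributes: Record<string, string>; // attributes an advertiser wants to match on
};

type UserConsent = {
  personalisedAdsEnabled: boolean; // explicit, revocable user choice
};

// Categories that must never reach the targeting engine.
const SENSITIVE_ATTRIBUTES = new Set([
  "racial_or_ethnic_origin",
  "political_opinions",
  "religious_beliefs",
  "sexual_orientation",
  "health_data",
  "trade_union_membership",
]);

function gateTargeting(req: TargetingRequest, consent: UserConsent): TargetingRequest | null {
  // Rule 1: no personalised targeting at all without consent.
  if (!consent.personalisedAdsEnabled) {
    return null; // caller falls back to contextual (non-personalised) ads
  }

  // Rule 2: strip sensitive attributes before anything downstream sees them.
  const allowed = Object.fromEntries(
    Object.entries(req.attributes).filter(([key]) => !SENSITIVE_ATTRIBUTES.has(key))
  );

  return { ...req, attributes: allowed };
}

// Example: an advertiser tries to target on political opinions.
const result = gateTargeting(
  { userId: "u-123", attributes: { political_opinions: "x", job_title: "engineer" } },
  { personalisedAdsEnabled: true }
);
console.log(result); // { userId: "u-123", attributes: { job_title: "engineer" } }
```

This is a small amount of engineering relative to an entire ad platform, which is why the binary on/off choice LinkedIn made reads as politics rather than technology.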
These steps could and should have been implemented long ago.
Instead of taking proactive measures, they waited until their actions jeopardized user safety, and now they seek recognition for disabling a tool that should never have been developed.
Addressing these issues is straightforward and essential.
It is something they should have started to plan immediately upon that 2022 DoJ enforcement, given the availability of the W3C Solid protocol, which builds on principles dating back to the web’s 1989 design to provide the necessary user safety architecture and features. The EU has now exposed LinkedIn management, because the tech giant apparently chose to ignore harms until EU law forced it to pay attention.
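For the unfamiliar, Solid keeps personal data in a pod the user controls, and an application sees only what the pod’s access rules allow. The sketch below, which assumes a placeholder WebID URL and skips Solid-OIDC authentication for brevity, shows the shape of that model: the pod server, not the platform, decides whether a read succeeds.

```typescript
// Minimal sketch of the Solid model: personal data lives in a pod the user
// controls, and an application only reads what the pod's access rules allow.
// The WebID below is a placeholder; a real application would authenticate
// via Solid-OIDC rather than making an unauthenticated request.

const WEBID = "https://example-pod.example.org/profile/card#me"; // hypothetical WebID

async function readProfile(webId: string): Promise<string> {
  // A Solid profile is an RDF document served over HTTP; the pod server
  // enforces the owner's access-control rules on every request.
  const res = await fetch(webId, { headers: { Accept: "text/turtle" } });
  if (!res.ok) {
    // A 401/403 here means the user has not granted this app access --
    // the decision sits with the data owner, not with the platform.
    throw new Error(`Pod refused access: ${res.status}`);
  }
  return res.text();
}

readProfile(WEBID)
  .then((turtle) => console.log(turtle))
  .catch((err) => console.error(err.message));
```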
Also related: It took an embarrassingly massive hack by China before Microsoft allowed users to see their own logs. CISA put it mildly:
Asking organizations to pay more for necessary logging is a recipe for inadequate visibility into investigating cybersecurity incidents and may allow adversaries to have dangerous levels of success in targeting American organizations.
There’s a terrible irony in how hard Microsoft management has worked to grant unsafe levels of visibility to Russia, China and advertisers… yet not to its own users.
CrowdStrike has been far more direct in its statements, once it realized Microsoft wasn’t being transparent:
Throughout our analysis, we experienced first hand the difficulties customers face in managing Azure’s administrative tools to know what relationships and permissions exist within Azure tenants, particularly with third-party partner/resellers, and how to quickly enumerate them. We found it particularly challenging that many of the steps required to investigate are not documented, there was an inability to audit via API, and there is the requirement for global admin rights to view important information which we found to be excessive. Key information should be easily accessible.
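As a rough illustration of the enumeration CrowdStrike describes, the sketch below lists third-party service principals and the delegated OAuth2 permission grants they hold in a tenant via Microsoft Graph. It assumes you already hold an access token with adequate read permissions (for example Directory.Read.All), and it omits token acquisition and result paging for brevity.

```typescript
// Hedged sketch: enumerate which applications (service principals) have been
// granted delegated OAuth2 permissions in a tenant, via Microsoft Graph.
// Assumes GRAPH_TOKEN holds a valid bearer token with sufficient read rights;
// paging via @odata.nextLink is omitted to keep the example short.

const GRAPH = "https://graph.microsoft.com/v1.0";
const TOKEN = process.env.GRAPH_TOKEN ?? ""; // placeholder: supply your own token

async function graphGet(path: string): Promise<any> {
  const res = await fetch(`${GRAPH}${path}`, {
    headers: { Authorization: `Bearer ${TOKEN}` },
  });
  if (!res.ok) throw new Error(`Graph call failed: ${res.status} ${path}`);
  return res.json();
}

async function listAppConsents(): Promise<void> {
  // Every application with access to the tenant appears as a service principal.
  const sps = await graphGet("/servicePrincipals?$select=id,appDisplayName");

  // Each grant records which delegated scopes an app has been consented to use.
  const grants = await graphGet("/oauth2PermissionGrants");

  for (const grant of grants.value) {
    const sp = sps.value.find((s: any) => s.id === grant.clientId);
    console.log(`${sp?.appDisplayName ?? grant.clientId}: ${grant.scope}`);
  }
}

listAppConsents().catch((err) => console.error(err.message));
```

Even a basic inventory like this depends on holding broad directory read rights, which is exactly the excessive-privilege problem CrowdStrike calls out.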