A month ago I was on a call with some of the top security experts in the industry. We were discussing my upcoming presentation about the exciting control and data privacy options that come from applying decentralization standards to the automotive industry.
To put it briefly, I was explaining how web decentralization standards can fix the growing issues of data ownership and consent in automotive technology, a fascinating problem I have spoken about at many, many conferences over the past seven years.
Here’s one of my slides from 2014, which hopefully increased awareness about automotive data ownership and consent risks:
Much to my surprise, I see this issue just hit the big papers for some well-deserved attention, though it may be for the wrong reasons.
…data isn’t the new oil, in almost any metaphorical sense, and it’s supremely unhelpful to perpetuate the analogy…
That’s just to frame the many problems with this article. Here’s another big one. The author wrote:
We’re at a turning point for driving surveillance — and it’s time for car makers to come clean…
Haha, turning point. I get it. That pun should have led to “it’s time for car makers to choose a direction”. Missed opportunity.
But seriously, the turning point for many of the issues in this article surely was years ago. He raises confidentiality and portability issues, for example. Why is now the turning point for these instead of 2014, when encryption options exploded? Or how about 2012, when a neural net run on GPUs crushed the ImageNet competition? I see no explanation for why these are present concerns rather than past, overdue ones.
I’d say the problem is so old we’re already at the solutions phase, long past the identification and criticism.
I had help doing a car privacy autopsy from Jim Mason, a forensic engineer. That involved cracking open the dashboard to access just one of the car’s many computers. Don’t try this at home — we had to take the computer into the shop to get repaired.
Sigh. Please do try this at home.
Right to repair is a very real facet of this topic. Cracking a dashboard for access is also very normal behavior and more people should be doing it.
When I volunteered my own garage space in the Bay Area, for example, I saw the reverse of that warning. Staff of several automotive companies came to join random people of the city in some good old community cracking of dashboards.
A guy from [redacted automotive company] said “…what do you mean you don’t bring rental cars to take apart and hack for a day? You should target ours and tell us about it.” Yikes. That’s not ethical.
The 1970s “hot-rod” culture in today’s terms is a bunch of us sitting around disassembled junkyard parts in a controlled garage (not operational rental or borrowed cars on the street!), clamps on wires running to Linux laptops, deciphering CAN bus codes.
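For the curious, deciphering those codes takes surprisingly little gear. Here is a minimal sketch of the laptop side, assuming a Linux SocketCAN interface already brought up as can0 and the python-can library; the channel name and bitrate are assumptions for illustration:

```python
# Minimal sketch of garage-style CAN bus sniffing. Assumes a Linux
# laptop with a SocketCAN interface already up, e.g.:
#   sudo ip link set can0 up type can bitrate 500000
# Requires python-can: pip install python-can
import can

# Connect to the SocketCAN interface (channel name is an assumption)
bus = can.interface.Bus(channel="can0", bustype="socketcan")

try:
    while True:
        msg = bus.recv(timeout=1.0)  # wait up to 1s for the next frame
        if msg is not None:
            # Print arbitration ID and raw payload bytes so patterns
            # can be spotted, e.g. a byte that climbs with engine RPM
            print(f"{msg.arbitration_id:03X}  {msg.data.hex(' ')}")
finally:
    bus.shutdown()
```

From there it is all pattern-hunting: rev the engine or tap the brakes, and watch which bytes change.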
This journalist desperately needs to participate sometime in a local car hacking community or at least read “Zen and the Art of Motorcycle Maintenance”….
When market regulations are working right, it should not be hard for a machine’s owner to crack it open. At least the journalist did not say an “idiot light” forced him to take his computer to the manufacturer for help.
Anyway, back to the point: automotive data models need to adopt decentralization standards if the industry wants to solve the data ownership issues raised in this story.
But for the thousands you spend to buy a car, the data it produces doesn’t belong to you. My Chevy’s dashboard didn’t say what the car was recording. It wasn’t in the owner’s manual. There was no way to download it.
To glimpse my car data, I had to hack my way in.
In summary, data is not the new oil, right to repair means healthy markets trend towards hardware access made easy, and concerns about confidentiality and portability of data in cars are being addressed with emerging decentralization standards.
Sorry this article may not come with a viral click-bait title, but I’m happy anytime to explain in much more detail how technical solutions are already emerging to solve data ownership concerns for cars, and to give examples with working code.
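To give a flavor of those emerging solutions, here is a minimal sketch of owner-controlled consent for vehicle data, in the spirit of W3C decentralized identity (DID) standards. Every name in it (make_consent_grant, the grant fields) is a hypothetical illustration, not any standard’s actual API:

```python
# Minimal sketch of owner-controlled vehicle data consent, loosely
# in the spirit of W3C decentralized identity (DID) standards.
# All names here are hypothetical illustrations, not a real API.
# Requires: pip install cryptography
import json
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The car owner, not the manufacturer, holds the private key
owner_key = Ed25519PrivateKey.generate()

def make_consent_grant(vehicle_id: str, scope: str, days: int) -> dict:
    """Owner signs a grant naming what data may be read, and until when."""
    grant = {
        "vehicle": vehicle_id,
        "scope": scope,  # e.g. "odometer" or "location"
        "expires": int(time.time()) + days * 86400,
    }
    payload = json.dumps(grant, sort_keys=True).encode()
    grant["signature"] = owner_key.sign(payload).hex()
    return grant

# A telematics service must present a valid grant to read the data;
# the owner can decline to issue one, or simply let it expire.
print(make_consent_grant("VIN-1234", "odometer", days=30))
```

The design point is simple: when the owner holds the key, the manufacturer has to ask, not the other way around.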
Here’s a shocking revelation: crosswalks don’t protect pedestrians.
As you may have read here before, when I joked about the fantasy crime called “jaywalking” or wrote about cultural disparities in road safety, crosswalks are an unfair conspiracy by American car manufacturers that removed non-motorized forms of transportation (including pedestrians and especially women on bicycles) from the road.
Creating and enforcing crosswalks have by their nature been extremely political acts.
It turns out that the car you drive is a surprisingly reliable proxy for your income level, your education, your occupation, and even the way you vote in elections.
Using cars as a proxy for power (enabling privilege and holding down the poor) is an inversion of what was supposed to happen with “freedom” of movement in America.
If you read the history of stop-lights in 1860s London, for example, a red light and a lowered semaphore arm signaled carriage traffic to stop being a threat. That’s right, stop-lights were initially designed (just a few decades after Robert Peel invented the modern concept of police) to allow pedestrians to move about freely. Somehow that concept was completely flipped, to where pedestrians were pushed into a box (and harassed by police).
Consider how the lack of a crosswalk, “ridiculously missing” as some would say, has even been linked to intentional unequal treatment of city residents.
To top it all off, police detaining and questioning people for not using crosswalks (see points above) has repeatedly proven to be racist.
In brief, if you see a lot of cars on roads and few bicycles, check your value system for being anti-American, let alone anti-humanitarian.
Car manufacturers conspired through crosswalk lobbying to shift all rights away from residents in order to force expensive cars to be purchased for “freedom” to move about safely.
This devious plot runs so thick that Uber allegedly emphasized to its drivers that it would be better to sit in crosswalks to pick up passengers. The logic is that they don’t care about blocking pedestrians, but do care about blocking other cars (note some US states also have laws encouraging this anti-pedestrian move).
Also worth noting is that the flagship propaganda from Tesla this year has been bulletproof oversized trucks, better suited for war zones where freedoms are missing than for the public spaces of streets originally meant to encourage freedom of human movement and play.
Given the American context of turning streets into corporate-controlled death zones, the problem has been bleeding into Canada’s famous culture of “niceness”.
Thus Quebec has posted a video of crosswalks that attempt to physically stop cars while telling drivers to be more polite to others:
It raises the question of what the damage or fine would be for running over the pop-ups, as they don’t seem to be designed (aside from the surprise) in a way that makes cars incur a cost for disobeying them.
It also reminds me of the Ukrainian art experiment in 2011 (regularly featured in my talks as an example test for driverless car engineering) that popped up human-shaped balloons in crosswalks to stop speeding cars (triggered by a radar gun).
What if these pop-ups in Quebec were shaped like humans instead of just rectangles? That would be an even greater surprise with more psychological deterrence.
However, it seems the Quebec design is more of an art experiment for shock, suggestion, and education than a real safety control, and on that note the pop-ups could be a lot more creative and shocking.
I mean, if you’re going to pop up a bunch of columns, how about making the columns rise to a scale that represents the increasing year-over-year death rate of pedestrians from cars? Then stick a “stop killing our kids” message on that barrier…as Small Wars Journal has illustrated:
This breach started with a physical break-in on November 17th, and those affected didn’t hear about it for nearly a month, until December 13th.
The break-in happened on Nov. 17, and Facebook realized the hard drives were missing on Nov. 20, according to the internal email. On Nov. 29, a “forensic investigation” confirmed that those hard drives included employee payroll information. Facebook started alerting affected employees on Friday Dec. 13.
The company didn’t notice hard drives with unencrypted data missing for half a week, which itself is unusual. The robbery was on a Sunday, and they only realized the drives were gone three days later, on a Wednesday.
Then it was nearly another two weeks after the break-in, on a Friday, when a “forensic investigation” finally confirmed that the missing drives stored unencrypted sensitive personal identity information.
This is like reading news from ten years ago, when large organizations still didn’t quite understand or practice the importance of encryption, removable media safety and quick response. Did it really happen in 2019?
It sounds like someone working at Facebook either had no idea that unencrypted data on portable hard drives is a terrible idea, or they were selling the data.
The employee who was robbed is a member of Facebook’s payroll department, and wasn’t supposed to have taken the hard drives outside the office.
“Wasn’t supposed to have taken…” is some of the weakest security language I’ve heard from a breached company in a long time. What protection and detection controls were in place? None?
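For perspective on how low the bar is: encrypting a payroll export before it ever touches removable media is a few lines of work. Here is a minimal sketch, assuming the Python cryptography package’s Fernet recipe; the file paths are hypothetical:

```python
# Minimal sketch of the obvious missing control: never let payroll
# data touch a portable drive unencrypted. Assumes the Python
# "cryptography" package (pip install cryptography); paths are
# hypothetical illustrations.
from cryptography.fernet import Fernet

# In practice the key lives in a managed secrets store (KMS/HSM),
# never on the same drive as the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("payroll_export.csv", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

# Only ciphertext ever gets written to the removable drive
with open("/media/usb0/payroll_export.enc", "wb") as f:
    f.write(ciphertext)

# A thief with the drive but without the key recovers nothing useful;
# authorized recovery is simply fernet.decrypt(ciphertext).
```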
Years ago there was a story about a quiet investigation at Facebook that allegedly discovered staff pulling hard drives out of datacenters, flying them to faraway airports, and exchanging them for bags of money.
Of course many other breaches have proven how internal staff who observe weak security leadership may attempt to monetize data they can access, whether it belongs to users or staff.
The man accused of stealing customer data from home mortgage lender Countrywide Financial Corp. was probably able to download and save the data to an external drive because of an oversight by the company’s IT department.
I also think we shouldn’t wave this Facebook story off as involving just 30,000 staff records instead of the more usual customer data.
First, staff often are customers too. Second, when you’re talking tens of thousands of people impacted, that’s a significant breach, and designating them as staff versus users is shady. A breach of personal data is a breach.
And there’s plenty of evidence that stolen data found on unencrypted drives, regardless of whose data it is, can be sold on an illegal market.
This new incident however reads less like that kind of sophisticated insider threat and more like the generic sloppy security that used to be in the news ten years ago.
Kaiser Permanente officials said the theft occurred in early December after an employee left the drive inside the car at her home in Sacramento. A week after the break-in, the unidentified employee notified hospital officials of the potential data breach.
Regardless of whether it was an insider threat, a targeted physical attack, or just disappointing, sloppy management practices and thoughtless staff…Facebook’s December 13 notice of a November 17 breach seems incredibly slow for 2019, given GDPR and the simple fact everyone should know that breach notifications are meant to happen within 72 hours.
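A quick back-of-the-envelope on that timeline, using the dates from the quoted report above, shows how far outside a 72-hour window this fell:

```python
# Back-of-the-envelope on the notification delay, using dates from
# the quoted report. GDPR Article 33 expects notification within
# 72 hours of an organization becoming aware of a breach.
from datetime import date

break_in = date(2019, 11, 17)  # physical theft of the drives
aware    = date(2019, 11, 20)  # Facebook realized drives were missing
notified = date(2019, 12, 13)  # affected employees first alerted

delay = notified - aware
print(f"{delay.days} days from awareness to notice")           # 23 days
print(f"{delay.days * 24 / 72:.1f}x the 72-hour expectation")  # ~7.7x
```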
1:45 P.M. “Amerika” passed two large icebergs in 41.27 N., 50.8 W.
9:40 P.M. From “Mesaba” to “Titanic” and all east-bound ships: Ice report in latitude 42º N. to 41º 25’ N., longitude 49º W to longitude 50º 30’ W. Saw much heavy pack ice and great number large icebergs. Also field ice. Weather good, clear.
11:00 P.M. Titanic begins to receive a sixth message about ice in the area, and radio operator Jack Phillips cuts it off, telling the operator from the other ship to “shut up.”
New legal research moves us closer towards holding social media executives criminally liable for the Rohingya crisis and other global security failures under their watch:
…this paper argues that it may be more productive to conceptualise social media’s role in atrocity crimes through the lens of complicity, drawing inspiration not from the media cases in international criminal law jurisprudence, but rather by evaluating the use of social media as a weapon, which, under certain circumstances, ought to face accountability under international criminal law.
The Guardian gave a scathing report of how Facebook was used in genocide:
Hate speech exploded on Facebook at the start of the Rohingya crisis in Myanmar last year, analysis has revealed, with experts blaming the social network for creating “chaos” in the country. […] Digital researcher and analyst Raymond Serrato examined about 15,000 Facebook posts from supporters of the hardline nationalist Ma Ba Tha group. The earliest posts dated from June 2016 and spiked on 24 and 25 August 2017, when ARSA Rohingya militants attacked government forces, prompting the security forces to launch the “clearance operation” that sent hundreds of thousands of Rohingya pouring over the border. […] The revelations come to light as Facebook is struggling to respond to criticism over the leaking of users’ private data and concern about the spread of fake news and hate speech on the platform.
The New Republic referred to Facebook’s lack of security controls at this time as a boon for dictatorships:
[U.N. Myanmar] Investigator Yanghee Lee went further, describing Facebook as a vital tool for connecting the state with the public. “Everything is done through Facebook in Myanmar,” Lee told reporters…what’s clear in Myanmar is that the government sees social media as an instrument for propaganda and inciting violence—and that non-government actors are also using Facebook to advance a genocide. Seven years after the Arab Spring, Facebook isn’t bringing democracy to the oppressed. In fact…if you want to preserve a dictatorship, give them the internet.
[United Nations investigators September report] called for the Myanmar army top brass to be prosecuted for genocide, labeled Facebook’s response as “slow and ineffective.” As FRONTLINE has reported, Facebook representatives were warned as early as 2015 about the potential for a dangerous situation in the nascent democracy. In November, Facebook executive Alex Warofka admitted in a blog post that the company did not do enough to prevent the platform “from being used to foment division and incite offline violence”…
And the UK House of Commons in 2018 reported how Facebook was described by the UN as having played “a determining role in stirring up hatred against the Rohingya Muslim minority”, with the UN Myanmar investigator calling it the “beast” that helped to spread vitriol.
The CTO of Facebook, Mike Schroepfer described the situation in Burma as “awful”, yet Facebook cannot show us that it has done anything to stop the spread of disinformation against the Rohingya minority. […] Facebook is releasing a product that is dangerous to consumers and deeply unethical.
It seems important when looking back at this time-frame to note that a key Facebook executive at the head of decisions about user safety was in just his second year ever as a “chief” of security.
He had infamously taken his first-ever Chief Security Officer (CSO) job at Yahoo in 2014, only to leave that post abruptly and in chaos in 2015 (without disclosing some of the largest privacy breaches in history) to join Facebook.
August 2017 was the peak period of risk, according to the analysis above. The Facebook CSO launched a “hit back” PR campaign two months later in October to silence the growing criticisms:
Stamos was particularly concerned with what he saw as attacks on Facebook for not doing enough to police rampant misinformation spreading on the platform, saying journalists largely underestimate the difficulty of filtering content for the site’s billions of users and deride their employees as out-of-touch tech bros. He added the company should not become a “Ministry of Truth,” a reference to the totalitarian propaganda bureau in George Orwell’s 1984.
His talking points read like a sort of libertarian screed, as if he thought journalists were ignorant and would foolishly push everyone straight into totalitarianism with their probing for basic regulation, such as better editorial practices and the protection of vulnerable populations from harm.
Think of it like this: the chief of security says it is hard to block Internet traffic with a firewall because doing so would lead straight to shutting down the business. That doesn’t sound like a security leader; it sounds like a technologist who puts making money above user safety (e.g., what the Afghanistan Papers call the profitability of war).
Indeed, a PR firm was hired by Facebook to peddle antisemitic narratives to discredit critics: dangerous propaganda methods were used to undermine the very people reporting on the facilitation of dangerous propaganda.
It was so obviously unethical and insecure that people at Facebook who cared…quit and said they would no longer be associated with the security team.
…Binkowski said she tried to raise concerns about misuse of the platform abroad, such as the explosion of hate speech and misinformation during the Rohingya crisis in Myanmar and other violent propaganda. “I was bringing up Myanmar over and over and over,” she said. “They were absolutely resistant.” Binkowski, who previously reported on immigration and refugees, said Facebook largely ignored her: “I strongly believe that they are spreading fake news on behalf of hostile foreign powers and authoritarian governments as part of their business model.”
Facebook’s top leadership was rejecting experienced voices of reason, instead rolling out angry “shame” statements to dismiss any criticism about its lack of progress on safety.
Stamos appeared to be expressing that for him to do anything more than what he saw as sufficient in that crucial time would be so hard that journalists (ironically the most prominent defenders of free speech, the people who drive transparency) couldn’t even understand it if they saw it.
My suggestion for journalists is to try to talk to people who have actually had to solve these problems and live with the consequences.
To me that reads like the CSO saying his staff suffer when they have to work hard, while calling journalists stupid for supposedly never talking to the people who actually solve these problems.
Such a patronizing and tone-deaf argument is hard to witness. It’s truly incredible to read, especially when you consider nearly 800,000 Rohingya were fleeing for their lives while a Facebook executive lectured about who really has to “live with the consequences”.
Warning: extremely graphic and violent depictions of genocide
Here’s another way to keep the Facebook “hit back” campaign against journalists in perspective. While the top executive in security was calling people closest to real-world consequences not expert enough on that exact topic, he himself didn’t bring any great experience or examples to the table to earn anyone’s trust.
A person with knowledge of Facebook’s [2015] Myanmar operations was decidedly more direct than [Facebook vice president of public policy] Allen, calling the roll out of the [security] initiative “pretty fucking stupid.” […] “When the media spotlight has been on there has been talk of changes, but after it passes are we actually going to see significant action?” [Yangon tech-hub Phandeeyar founder] Madden asks. “That is an open question. The historical record is not encouraging.”
The “safety dial was pegged in the wrong direction”, as some journalists put it back in 2017, under a CSO who apparently thought it a good idea to complain about how hard it was to protect people from harm (while making huge revenues). Perhaps business schools soon may study Facebook’s erosion of global trust under this CSO’s leadership:
We know tragically today that journalists were repeatedly right in their direct criticism of Facebook security practices and in their demands for greater transparency. We also plainly see how an inexperienced CSO’s personal “hit back” at his critics was wrong, with its opaque promises and patronizing tone based on his fears of an Orwellian fiction.
Facebook has been and continues to be out of touch with basic social science. Facebook resisted and continues to resist safety controls on speech that protect human rights, all while saying it is committed to safety and arguing against norms of speech regulation.
The question increasingly is whether actions like an aggressive “hit back” on people warning of genocide at a critical moment of risk (arguing it is hard to stop Facebook from being misused as a weapon, while pushing back on criticism of the use of Facebook as a weapon) make a “security” chief criminally liable.
My sense is it will be anthropologists, experts in researching baselines of inherited rights within relativist frameworks, who emerge as best qualified to help answer questions of what’s an acceptable vulnerability in social media technology.
The personal, social, and material harms our participants experienced have real consequences for who can participate in public life. Current laws and regulations allow digital platforms to avoid responsibility for content…. And if online spaces are truly going to support democracy, justice, and equality, change must happen soon.
Accountability of a CSO for atrocity crimes during his watch appears to be the most logical change, and a method of reasoned enforcement, if I’m reading these human rights law documents right.
1) January 15, 2019 incident shows the attackers opened a Facebook account and used it in their planning to the last day of the raid. On the account, they exchanged ideas on the best weapons to use and how to use them in executing the mission. …the terrorists had avoided using mobile phones in their communication and shifted to Facebook to coordinate the mission.
2) Snyder, a Republican, was governor from 2011 through 2018. The former computer executive pitched himself as a problem-solving “nerd” who eschewed partisan politics and favored online dashboards to show performance in government. Flint turned out to be the worst chapter of his two terms due to a series of catastrophic decisions that will affect residents for years. The date of Snyder’s alleged crimes in Flint is listed as April 25, 2014, when a Snyder-appointed emergency manager who was running the struggling, majority Black city carried out a money-saving decision to use the Flint River for water while a pipeline from Lake Huron was under construction. The corrosive water, however, was not treated properly and released lead from old plumbing into homes. Despite desperate pleas from residents holding jugs of discolored, skunky water, the Snyder administration took no significant action until a doctor reported elevated lead levels in children about 18 months later.
3) Opinion piece in the Toronto Star: “Facebook doesn’t need to ‘do better’ … it needs to do time”
Section 83.19(1) of our Criminal Code says that knowingly facilitating a terrorist offence anywhere in the world, even indirectly, is itself a terrorist offence.
…while the US sends hundreds of thousands of poor people to prison every year, high-level corporate executives, with only the rarest of exceptions, have become effectively immune from any meaningful prosecution for crimes committed on behalf of their companies.