Robots are, at their core, surveillance devices: they are built to monitor and interact with their surroundings. That makes them natural candidates for use as informants by law enforcement agencies.
Information gathering (IG) algorithms aim to intelligently select the mobile robotic sensor actions required to efficiently obtain an accurate reconstruction of a physical process, such as an occupancy map, a wind field, or a magnetic field.
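To make that concrete, here is a minimal sketch of a single greedy IG step in Python. It assumes a Bernoulli occupancy-grid belief and an idealized square sensor footprint; the grid size, candidate poses, and gain function are all illustrative assumptions, not any particular planner’s design.

```python
# A minimal sketch of one greedy information-gathering (IG) step, assuming a
# Bernoulli occupancy-grid belief and an idealized square sensor footprint.
# Grid size, candidate poses, and the gain function are all illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Belief over a small occupancy grid: P(cell is occupied), partly uncertain.
belief = rng.uniform(0.05, 0.95, size=(8, 8))

def cell_entropy(p):
    """Shannon entropy (bits) of a Bernoulli occupancy belief, per cell."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def expected_info_gain(belief, pose, radius=2):
    """Total entropy of the cells a sensor at `pose` would observe."""
    r, c = pose
    patch = belief[max(r - radius, 0):r + radius + 1,
                   max(c - radius, 0):c + radius + 1]
    return cell_entropy(patch).sum()

# The robot greedily moves to whichever candidate pose is most informative.
candidates = [(1, 1), (1, 6), (6, 1), (6, 6), (4, 4)]
best = max(candidates, key=lambda p: expected_info_gain(belief, p))
print("next sensing pose:", best)
```

Real planners also trade travel cost against expected gain and replan after every observation; this greedy step is only the skeleton of the idea.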
Presently, we are watching sizable companies run robots in San Francisco on a supposed “trial” basis (pun unintended), and some of those robots have effectively been commandeered by the police. The most important question now looms larger than ever: can these robotic informants be deemed reliable? Or, more to the point: what protections from abuse are available?
In addition to the San Francisco homicide, Bloomberg’s review of court documents shows police have sought footage from Waymo and Cruise to help solve hit-and-runs, burglaries, aggravated assaults, a fatal collision and an attempted kidnapping.
In every case Bloomberg reviewed, court records show police collected footage from Cruise and Waymo shortly after obtaining a warrant. In several cases, Bloomberg could not determine whether the recordings were used in the resulting prosecutions; in a few, law enforcement and attorneys said the footage played no part or was a mere formality. But video evidence has become a linchpin of criminal cases, so it’s likely only a matter of time before robot footage decides one.
It’s not surprising, but it’s worth noting, that 90% of engineers in America have no education or training in the humanities or political science. What’s more, none of them have signed a code of ethics.
This has practical consequences: many American engineering teams have simply not designed their robots to handle the hard decisions that come with becoming a police informant.
People often discuss the typical security risks, like loss of privacy, but I haven’t seen anyone address the issue most fundamental to robots. Here’s a starting point for the right analysis.
“Whenever you have a company that collects a large amount of data on individuals, the police are eventually going to come knocking on their door hoping to make that data their evidence,” Guariglia said.
Exactly right. This too.
For those who say it doesn’t matter if police have access to footage because they aren’t doing anything wrong, Guariglia says, “you have no idea what you’re doing wrong.”
“People in a lot of states where it was legal to get an abortion a few months ago suddenly have to live in fear that any day now, these states could retroactively prosecute people,” he said. “And then you start to wonder about all those months where you traveled to your doctor or mental health specialist, how much data had been collected and what can law enforcement learn about me when I didn’t think I had anything to hide?”
This perspective comes from Matthew Guariglia, a policy analyst at the Electronic Frontier Foundation (EFF), a libertarian organization that strongly advocates for privacy rights.
The EFF focuses almost exclusively on privacy, and that narrow focus sometimes produces unintended consequences and problems for society.
Privacy is discussed constantly, but data integrity can matter just as much. Journalists regularly point to cases that were solved, or stalled, because accurate data was or wasn’t available.
What gets far less media attention, unfortunately, is the risk of data-integrity abuse.
With privacy, the worry is that individuals lose control of their personal information, a concept everyone understands. With integrity, the worry is that someone manipulates or poisons the data itself.
Consider a hypothetical: a company like Waymo dislikes a politician and hands police falsified data to tilt the political landscape. Or picture a Waymo employee abusing their access to corrupt the data, frame a superior, and have them arrested.
The glaring flaw at these robot companies is their insufficient safeguards against data-poisoning attacks.
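To show what one such safeguard could even look like, here is a minimal sketch in Python of a single building block: signing every sensor record at capture time so that any later edit is detectable. The record schema, key handling, and constants below are assumptions for illustration, not any vendor’s actual pipeline.

```python
# Illustrative only: sign sensor records at capture time so that any
# after-the-fact edit invalidates the tag. The key storage, timestamps, and
# record schema are simplifying assumptions, not a real vendor's design.
import hashlib
import hmac
import json

CAPTURE_KEY = b"device-secret"  # assume this lives in secure hardware

def sign_record(record: dict) -> dict:
    """Attach an HMAC-SHA256 tag over the canonical JSON bytes of a record."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(CAPTURE_KEY, payload, hashlib.sha256).hexdigest()
    return {"record": record, "tag": tag}

def verify_record(signed: dict) -> bool:
    """Recompute the tag; any change to the record breaks verification."""
    payload = json.dumps(signed["record"], sort_keys=True).encode()
    expected = hmac.new(CAPTURE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["tag"])

clip = sign_record({"camera": "front", "timestamp": 1699999999})
assert verify_record(clip)

clip["record"]["timestamp"] = 1700000000  # someone alters the evidence
assert not verify_record(clip)            # and the tampering is caught
```

Note that a symmetric MAC only catches tampering by parties who never held the key; defending against the insider scenario above would take hardware-backed asymmetric signatures and independent, auditable chain-of-custody verification.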
Proper assessment of risk models for robot deployments is routinely neglected, and when it happens at all, it is done badly.
So the EFF keeps emphasizing privacy, and companies keep emphasizing their commitment to privacy in PR campaigns. But those of us working in actual security, on real-world harms, are focused on whether the data itself can be trusted.
I have experienced the many failures of the latest surveillance systems firsthand, through my own cameras in San Francisco. These problems are deeply rooted in American history, yet the media covers them poorly.
That lack of exposure is unfortunate, because without broader awareness we may grasp the urgency of the colossal data-integrity challenge only when it’s too late.
The risk extends beyond privacy loss; loss of integrity poses an equal, if not greater, threat to society. Waymo, for instance, could offer airtight privacy, satisfying privacy extremists like the EFF, while undermining democracy by compromising the integrity of its data.