Having a way to disable a malfunctioning robot, let alone a malicious one, is absolutely essential to basic human rights and to good governance.
Artificial intelligence can help decide whether you get a job, bank loan or housing — but such uses of the technology could soon be limited in California. Regulations proposed today would allow Californians to opt out of allowing their data to be used in that sort of automated decision making. The draft rules, floated by the California Privacy Protection Agency, would also let people request information on how automated decisions about them were made.
What’s missing from this analysis is twofold:
- Opt-out is framed as disabling, i.e. a complete shutdown, without the more meaningful “reset” as a path out of danger. Leaving a service with nothing left behind is one thing, and it is highly unlikely/impractical anyway given the “necessary” exceptions and clauses; leaving a trail of mistakes behind is another. The Agency should be planning for a reset even more than trying to enforce the tempting but usually false promise of a hard shutdown. This has been one of the hidden (deep-in-the-weeds) lessons of GDPR.
- Letting people request their information on automated decisions is backwards. With AI processing on a Solid Pod (a decentralized personal data store), these requests would be made to the person instead of from them. Even with the opportunity to chase their data all over the place, people are far better off achieving the same end without being saddled with the basically impossible and expensive task of finding everyone, everywhere, who is making decisions about them without their consent; a minimal sketch of the inverted flow follows this list.
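As a rough illustration of that inversion, here is a minimal TypeScript sketch using the @inrupt/solid-client library. It assumes a purely hypothetical arrangement: decision-makers append a record to a log resource in the person’s own Pod each time they run an automated decision about that person. The Pod URL (`https://alice.example.org/decisions/log.ttl`) and the vocabulary (`https://example.org/ns/decision#`) are made up for illustration; neither is a standard.

```ts
import {
  getSolidDataset,
  getThingAll,
  getStringNoLocale,
  getDatetime,
} from "@inrupt/solid-client";

// Hypothetical resource in *my* Pod where decision-makers must append
// a record for each automated decision they make about me.
const DECISION_LOG = "https://alice.example.org/decisions/log.ttl";

// Hypothetical vocabulary for illustration -- not a published ontology.
const NS = "https://example.org/ns/decision#";

async function listDecisionsAboutMe(): Promise<void> {
  // Read the log from my own storage. For a private resource, pass an
  // authenticated `fetch` from @inrupt/solid-client-authn-* as an option.
  const dataset = await getSolidDataset(DECISION_LOG);

  for (const record of getThingAll(dataset)) {
    const decidedBy = getStringNoLocale(record, `${NS}decidedBy`);
    const outcome = getStringNoLocale(record, `${NS}outcome`);
    const decidedAt = getDatetime(record, `${NS}decidedAt`);
    console.log(`${decidedAt?.toISOString()} ${decidedBy}: ${outcome}`);
  }
}

listDecisionsAboutMe().catch(console.error);
```

The point of the design is that the audit trail lives with the person: “requesting information” collapses into reading your own storage, rather than petitioning every company that may have run a model against your data.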
See also: Italy