Humans who make terrible decisions often get cited for being under the influence of drugs or alcohol.
Autonomous vehicles (AV), on the other hand, need no drugs or alcohol to make equally bad decisions.
This often gets reported as though an AV isn’t at risk from drinking, while in reality the risk is from… influences.
And it turns out the AV is far more susceptible to influences than humans are.
Tesla owners, for example, often seem to think they should train the company’s “automation” products to be in a huge rush and ignore all traffic laws: stop signs, red lights, and yellow lines taught as optional in their world of influence.
That’s far worse than one human drinking too much, or even dozens, because the bad behavior of Tesla’s “evil insiders” gets ingested into the entire AV fleet as free speed extremism… a huge robot army in permanent improvisation, more dangerous than drugs or alcohol.
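To make that mechanism concrete, here is a toy sketch of classic training-data poisoning. Nothing here reflects Tesla’s actual pipeline; the function, names, and numbers are all invented for illustration. The point is only that a system which naively aggregates driver demonstrations lets a small minority of bad actors drag the learned behavior of the whole fleet:

```python
# Hypothetical sketch: a fleet policy learned by naively averaging
# driver demonstrations. This is NOT any real vendor's pipeline --
# it only illustrates how data poisoning shifts shared behavior.

def fleet_stop_policy(demonstrations):
    """Learn a stop duration (in seconds) at stop signs by averaging
    what every driver in the fleet demonstrated."""
    return sum(demonstrations) / len(demonstrations)

# 90 law-abiding drivers demonstrate a full 3-second stop...
honest = [3.0] * 90
# ...but 10 "rushed" drivers demonstrate rolling (0-second) stops.
poisoned = [0.0] * 10

policy = fleet_stop_policy(honest + poisoned)
print(f"Learned stop duration: {policy:.1f}s")  # 2.7s
print("Comes to a full stop?", policy >= 3.0)   # False
```

Ten percent of drivers misbehaving is enough, under this naive aggregation, to make every car in the fleet roll through stop signs. A single drunk driver degrades one car; a poisoned training signal degrades all of them at once.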
Serious food for thought when The Verge reports:
…there is scant data that proves that fully automated vehicles are safer than human drivers.
There is plenty of evidence that fully automated vehicles are easily put under the influence… of almost anything.
It’s like how TayBot was turned into a Nazi within 24 hours… a really dumb design flaw by Microsoft, which I’ve explained in great detail here before.
At least humans are influenced by known things and can be easily tested to determine how dangerous they will be when attempting to operate heavy machinery.
The next time a Tesla crashes and someone reports the driver was given a standardized, transparent test for influences, demand to know what the AV software was tested for after the crash, how, and by whom.
Think of it like this. Microsoft disabled their bot after one day. Uber cancelled their bot after one pedestrian was killed. Tesla turned their bot into a pay-to-play for the wealthy to flout safety laws and dangerously influence it, such that more and more and more people are dead.