The title of the paper published 21 September 2022 is ominous:
Selective neutralisation and deterring of cockroaches with laser automated by machine vision
The abstract is even more chilling:
…we present a laser system automated by machine vision for neutralising and influencing the behaviour of insect pests. By performing experiments on domiciliary cockroaches, Blattella germanica, we demonstrate that our approach enables the immediate and selective neutralisation of individual insects at a distance up to 1.2 m. We further show the possibility to deter cockroaches by training them not to hide under a dark shelter through aversive heat conditioning with a low power-laser. Parameters of our prototype system can readily be tuned for applications in various situations and on different pest species like mosquitoes, locusts, and caterpillars.
Targets can be trained not to hide, so they come into the field of view for “neutralisation”, and applications may include a wide variety of “species”.
The authors explain the risks they considered, but their analysis seems rather… superficial.
…we envisioned major health and safety risks that could be triggered by the use of high laser power, such as eye damage and fire ignition, which prevented the large-scale expansion of our prototype.
When I think of major risks, the first thing that comes to mind is incorrect targeting: killing the wrong target, or injuring bystanders and property nearby. I mean, data integrity should be at the top of every machine learning risk list, no? I was very disappointed to find it missing here.