A new report called “Can Emerging Technologies Lead a Revival of Conflict Early Warning/Early Action? Lessons from the Field” seems to have this buried lede:
It is worth noting that dealing with misinformation and disinformation in the EWEA field is labor intensive and relies on human interpreters. So far, we have found no models that use automated methods to screen out some of the bad information in order to lighten the workload of human reviewers. This is an area for potential future growth.
No automated methods in any models?
Humans (e.g. journalists) are required to filter signal from noise?
Those points suggest the field believes no technology exists for safely automating data integrity checks, even though such automation is absolutely essential to scaling EWEA technology.
It’s a very different message from that of the security advocates (included in some of my presentations) purporting to do exactly this kind of work, such as the “AI” pandemic watch systems that completely missed Ebola because they couldn’t understand non-English communication, or the Facebook CSO who, from an ivory tower in Silicon Valley, infamously claimed to understand the problems of monitoring for genocide signals better than the people in the field reporting on them.
This report thus casts a shadow on companies that have long argued they were somehow capable of processing global misinformation centrally at massive scale (when instead they have likely been facilitating atrocity crimes).
It’s very important to note that field models aren’t yet adopting emerging technology even when it is being developed to solve their exact problems. That finding needs to be called out prominently, because it’s a huge opportunity for emerging technology, and especially for humanitarians working in the security profession, to think about more seriously.
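To make the report’s “area for potential future growth” concrete, here is a minimal sketch of what automated pre-screening could look like. None of this comes from the report itself: the report fields, reliability scores, and thresholds are hypothetical, and the point is only to triage the human reviewers’ workload, never to decide automatically what is true.

```python
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class FieldReport:
    report_id: str
    text: str
    source_reliability: float  # hypothetical 0.0 (unknown) to 1.0 (vetted source)
    corroborations: int        # count of independent reports of the same event


def near_duplicate(a: str, b: str, threshold: float = 0.9) -> bool:
    """Flag reports whose text is almost identical (likely copy-paste amplification)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold


def triage(reports: list[FieldReport]) -> dict[str, str]:
    """Assign each report a review lane so humans see the riskiest items first.

    This does not screen anything out of human view; it only orders the workload.
    """
    lanes: dict[str, str] = {}
    seen: list[FieldReport] = []
    for r in reports:
        if any(near_duplicate(r.text, s.text) for s in seen):
            lanes[r.report_id] = "deprioritized-duplicate"
        elif r.source_reliability >= 0.7 or r.corroborations >= 2:
            lanes[r.report_id] = "standard-review"
        else:
            lanes[r.report_id] = "priority-human-review"  # low trust, uncorroborated
        seen.append(r)
    return lanes


if __name__ == "__main__":
    sample = [
        FieldReport("r1", "Armed group sighted near the northern checkpoint.", 0.8, 3),
        FieldReport("r2", "Armed group sighted near the northern checkpoint!", 0.2, 0),
        FieldReport("r3", "Market prices doubled overnight in the eastern district.", 0.3, 0),
    ]
    for report_id, lane in triage(sample).items():
        print(report_id, "->", lane)
```

Even a crude pass like this, run before anything reaches a human interpreter, is the kind of workload-lightening the report says it could not find in any model in the field.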