This article on the USAF's concern about narrowly defined success criteria is a great read.
In a recent test, an experimental target recognition program performed well when all of the conditions were perfect, but a subtle tweak sent its performance into a dramatic nosedive,
Maj. Gen. Daniel Simpson, assistant deputy chief of staff for intelligence, surveillance, and reconnaissance, said on Monday.
Initially, the AI was fed data from a sensor that looked for a single surface-to-surface missile at an oblique angle, Simpson said. Then it was fed data from another sensor that looked for multiple missiles at a near-vertical angle.
“What a surprise: the algorithm did not perform well. It actually was accurate maybe about 25 percent of the time,” he said.
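What Simpson is describing is classic distribution shift: the model picks up cues that happen to hold in a narrow training regime and fall apart when the regime changes. Here's a minimal synthetic sketch of that failure mode using scikit-learn. Everything in it is made up for illustration (the feature names, the noise levels, the data); it has nothing to do with the actual program, it just shows how a classifier leaning on a spurious cue can land well below chance on shifted data:

```python
# Synthetic sketch of the failure mode Simpson describes: a classifier leans
# on a spurious cue that tracks the label in the training regime, then flips
# in deployment. All data and feature names here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n, shortcut_tracks_label):
    y = rng.integers(0, 2, n)              # 1 = missile, 0 = clutter
    signature = y + rng.normal(0, 1.5, n)  # genuinely predictive but noisy
    # The spurious cue is nearly noise-free, so the model prefers it;
    # in the shifted regime its correlation with the label reverses
    cue = (y if shortcut_tracks_label else 1 - y) + rng.normal(0, 0.3, n)
    return np.column_stack([signature, cue]), y

# Train where the shortcut holds (think: single missile, oblique angle)
X_tr, y_tr = make_data(5000, shortcut_tracks_label=True)
model = LogisticRegression().fit(X_tr, y_tr)

# Evaluate where it doesn't (multiple missiles, near-vertical angle)
X_te, y_te = make_data(5000, shortcut_tracks_label=False)

print("training-regime accuracy:", accuracy_score(y_tr, model.predict(X_tr)))
print("shifted-regime accuracy: ", accuracy_score(y_te, model.predict(X_te)))
```

The exact numbers don't matter; the point is that "accurate on the data it was trained on" tells you almost nothing about performance under a new sensor geometry, which is exactly the narrow definition of success the article warns about.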
It reminds me of the accuracy reports from 1960s-era IGLOO WHITE, not to mention the early guided bombs of the Korean War, and how poorly general success criteria were defined (e.g., how McNamara's reflections in The Fog of War apply to AI).