“Humility Makes a Great Intelligence Officer”

Marc Polymeropoulos has published a new leadership book. He describes it as a revelation about his failures in the first two-thirds of his career, which he credits with making him a good leader in the final third.

These things stood out to me in his CSPC interview (118 views):

He compares intelligence to batting in baseball, where a .300 batting average is great even though it means being wrong 70% of the time. I actually like this, as I tend to define intelligence, especially artificial intelligence, as the ability to hit a target.

The example he gives is chilling, however: Marc’s best agent in Afghanistan was tortured and killed because of a very simple and predictable operational mistake.

Maybe, then, baseball’s 70% failure rate doesn’t transfer to other fields, especially high-risk ones. Standards of quality in sports can afford to be low because, by design, the outcomes shouldn’t really matter; it’s just a game.

As Calvin and Hobbes put it a long time ago, next to their snowman made from only two balls, “the fastest route to success is lowering expectations”.

Something tells me the .300 might need to rise to something more like .900 when lives are on the line every day, because nobody should want to go to bat repeatedly knowing each outing carries a 70% chance of death! But who in history has ever batted .900? That’s where gaining an upper hand through modern information warfare comes in, right?
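The arithmetic here compounds quickly, which a minimal sketch makes concrete (the function name and outing counts are illustrative, not anything from the interview):

```python
def survival_odds(per_attempt_success: float, attempts: int) -> float:
    """Chance of succeeding every single time across repeated attempts."""
    return per_attempt_success ** attempts

# A .300 average compounds brutally when failure is fatal:
five_outings = survival_odds(0.3, 5)    # roughly 0.24%
# Even a .900 average leaves long odds over a career:
ten_outings = survival_odds(0.9, 10)    # roughly 35%
```

Even batting .900 leaves only about a one-in-three chance of surviving ten such outings, which is the real gulf between sports metrics and life-or-death operations.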

Even more confusing is “employ the dagger”, a notion Marc offers the audience as evidence that “competition is good”. When officers did something he wanted, he says, he would award them a physical token of appreciation: a souvenir knife he’d buy at the market for $10.

In corporate circles this is a well-known tactic. Give people a snow globe after x years of showing up to work and they’ll work more, right?

In baseball I guess this is the idea that a coach could give out a $10 trophy bat for hitting a ball, instead of expecting the team to find enough satisfaction in achieving a 70% failure rate.

It’s thus interesting to see him conflate a well-worn concept, internal appreciation and reward systems to boost morale, with competition as a raw “good”. What if the competition is toxic because teams are in fact killing each other instead of targets?

Something obviously sounds off about generic praise of competition. Perhaps Marc, surrounded by hyper-competitive personalities in the killing fields, is over-compensating to cover for his softer, anti-competitive leadership messages of inclusion and unity.

He’s obviously a master at fitting in. I wonder whether floating these ideas about 1) a pointy, sharp tool and 2) competition is meant to disguise a true message that is rather blunt and collaborative.

There is neither uniqueness to nor any shortage of such inexpensive daggers (hey, even I have at least TWO from my time in… Nepal), nor any real scarcity to his approvals. Competition for an award and attention isn’t a fair description of a system that could operate just fine without any competition (with each other) at all.

He goes on to say his leadership success grew by bringing more people into communications, a larger tent for collaboration (e.g. bringing finance officers and kitchen chefs into his planning). This suggests he valued the opposite of pressing everyone into competition.

From there he digs further and hits the empathy button hard, which takes the listener further from his opening salvo on competition culture.

In other words, he’s basically saying competition works if people are kind instead of selfish, aligned instead of oppositional.

His big-tent mindset (he calls nostalgically for everyone in competition to hold a sense of unity), combined with his points on empathy and repeated references to humility as a core ingredient of great intelligence… it all suggests his love of competition comes with some pretty huge caveats.

Humility is presented as a competitive advantage in intelligence — hitting a target without pride or arrogance, much as the ǃKung San of the Kalahari have advocated for thousands of years.

When a young man kills much meat, he comes to think of himself as a chief or a big man – and thinks of the rest of us as his servants or inferiors. We can’t accept this … so we always speak of his meat as worthless. This way, we cool his heart and make him gentle.

While Marc’s leadership advice sounds good in principle, there’s a moment in the interview where he boasts about delivering news of a successful assassination. Oops.

Such hubris about targeted hits not only is a swing and a miss according to his own doctrine, it’s unfortunately a widespread problem he could be making worse. Just look at some of the latest American Army PSYOP and Department of State messaging (while Frank Church rolls in his grave).

In all seriousness, how would baseball exist if hyper-competitive batters could assassinate a pitcher, sort of like how Walter “Steel Arm” Dickey was killed in 1923 with a dagger?

By the time he was 17, he was a pitcher who threw so hard and fast that he gained the nickname that followed him the rest of his life. […] “He was as good as I ever saw throw a baseball,” Roy recalled to James, “I remember one time that Steel Arm brought in his whole team to the dugout. No one was left except him and the catcher. He then struck out three straight men, daring them all the time to hit. They could not do it.”

Such questions about lawfulness in competition bring to mind the lessons behind a song called “Move on Up” as well as a film called The Rubble Kings.

Humility in the context of American history in fact might serve as an excellent gateway into discussing rule of law being far more important to real success in intelligence than arbitrary displays of power.

As an afterthought, has anyone at the NSA ever said humility is a good thing?

China Claims U.S. Aircraft Carriers Can’t Hide From New AI on Satellites

A new report out of China suggests it’s using AI on satellites to find and constantly track U.S. aircraft carriers, rendering them easy prey.

When USS Harry S. Truman was heading to a strait transit drill off the coast of Long Island in New York on June 17 last year, a Chinese remote sensing satellite powered by the latest artificial intelligence technology automatically detected the Nimitz-class aircraft carrier and alerted Beijing with the precise coordinates, according to a new study by Chinese space scientists.

This is a long-predicted shift from “on the ground” processing power for analysis, with its logistics issues (e.g. bandwidth limits, data integrity), to “real time” analysis on board the sensors themselves.

A core tenet of aircraft carriers has been, of course, that they can sail hundreds of miles away undetected while unleashing massive airborne devastation.

The threat of constant tracking by new global sensor networks means, in simple terms, that naval strategists have to expend far more effort to operate carriers safely.

Submarines and fast attack boats, beyond the reach of satellite technology and able to sail undetected more easily towards targets, become a more logical physical platform for launching airborne attacks from the sea. It’s the kind of thing both Iran and Ukraine have been proving grounds for lately.

Such developments mark a potential inversion of the logic behind the massive build-up of the U.S. Navy that stretches back to the 1980s. Allegedly a June 1977 dinner with Graham Claytor (Navy Secretary under President Carter) led to the famous “Ocean Venture” exercise that remains relevant even today.

Mr. Lehman notes, the U.S. fleet participated “in a sophisticated program of coordinated, calculated, forward aggressive exercises—all around the world.” The Soviets would thereby see that, with any aggressive move they made, “the might of the U.S. Navy would be off their coasts in a heartbeat.”

Over 250 ships and more than a dozen countries in 1981, then under the racist President Reagan, set out to demonstrate to Soviet leaders that a giant NATO alliance could achieve dominance of the sea such that Russia would not detect navies (aircraft carriers in particular) until it was too late.

That’s a premise China would like to believe it has finally shattered by following the long-coming trend of on-board image processing with inexpensive sensors in the air.

As a footnote on blustery news about advances in technology, if you dig into a “dominance” narrative of the U.S. Navy technology during the 1980s you’ll also find surveillance history gems.

For example, ships in the Norwegian fjords were trivial to spot visually when covered in snow (bright white) against deep dark waters, yet at the same time hard to detect with even the best radar of the day because they sat beneath mountains.

British aerial reconnaissance of German battleship Tirpitz, near Bogen in Narvik Fjord, Norway, 17 July 1942. Source: IWM

This was well known in WWII and of course still true when NATO ships closed in on Russia in the Norwegian waters decades later. Moreover, nobody had bothered to heat modern ship antennas so NATO sailors had to climb in frigid weather to remove ice with hand tools.

There’s technology… and then there’s ignoring history while deploying technology into a world of well-documented variables. That’s a huge hint about why China is probably wrong, given how badly AI tends to perform when pressed into actual service.

The reality of big data security (e.g. vulnerabilities in AI due to trivial integrity flaws, made even worse by satellite platform limitations) is another way to look at this.

China is doing what should be expected. They follow easy and obvious trends in big data, moving analytics to the edge and improving sensor resolution. Yet China (let alone Russia) isn’t particularly known for being able to handle adversarial creativity and the unexpected (perturbations designed to defy expectations).
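The perturbation weakness is easy to demonstrate in miniature. Here is a toy example of the fast-gradient-sign idea applied to a linear “detector” — a stand-in of my own invention, not any real satellite model — where a small, crafted nudge to the input flips the detection decision:

```python
def score(w, x):
    """Linear detector: positive score means 'ship detected'."""
    return sum(wi * xi for wi, xi in zip(w, x))

def fgsm_perturb(w, x, eps):
    """Nudge each input by eps against the sign of its weight.

    For a linear model the gradient of the score w.r.t. x is just w,
    so stepping each feature by -eps * sign(w_i) lowers the score most
    per unit of perturbation (the FGSM idea)."""
    sign = lambda v: 1 if v > 0 else -1 if v < 0 else 0
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [0.5, -0.3, 0.8]   # hypothetical detector weights
x = [0.2, 0.1, 0.3]    # a "ship-like" input the detector flags
adv = fgsm_perturb(w, x, eps=0.2)  # small change, detection flips
```

Real convolutional detectors are not linear, but the same gradient-following trick transfers, which is exactly the kind of integrity flaw that makes edge-deployed AI brittle against a creative adversary.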

Carriers may sail another day, in other words, just by returning to the lessons of a bold “Ace” Lyons deception maneuver — ignore academic theorists decrying the end of carriers in 1981 by flying a dozen jets 1,000 miles from the USS Eisenhower to surprise buzz adversaries right in the middle of their naval exercises.

[Soviets] were particularly taken aback by the prowess of our commanders at sea in cover and deception operations. To kill a ship you need to find it first, and our commanders stayed up nights thinking up ways to bluff, trick, hide, and conceal their forces at sea so that they couldn’t be found.

New Zealand Surveillance of Retailers Exposes Recycling Fraud

Trackers were affixed to expensive electronics to test retailer claims of recycling. A simple “fault” was created to see if anyone would take the five minutes needed to repair and resell a perfectly good device; unfortunately, the devices were sent to landfill instead.

“We now live in a world where nearly new appliances, and all the embedded resources contained in them and used to produce them can be thrown straight into a hole in the ground at the first sign of fault.” Smith said New Zealand was the only country in the OECD without a national e-waste scheme and its e-waste per capita rate was one of the highest in the world.

One would think the severe land constraints of islands would move the market naturally to make them leaders in landfill avoidance.

Unveiling Shadow AI: The Hidden Threat Within Organizations

In a scene straight out of a cyberpunk thriller, Shadow AI is infiltrating organizations stealthily, often without formal approval or oversight. These rogue AI initiatives range from unauthorized software and apps to secretive AI development projects. Imagine AI users becoming hidden robotic spies, quietly operating within the company.

Shadow AI is not just a sequel to Shadow IT—it’s a more formidable antagonist with greater potential for harm and a wider reach.

The true peril of Shadow AI lies in its ability to bypass governance, risk, and compliance (GRC) controls. Picture employees unknowingly feeding confidential information into ChatGPT, oblivious to the terms of service. This seemingly innocent act could violate the organization’s data protection commitments to clients. Even worse, if this sensitive data gets incorporated into future training sets, it might resurface unexpectedly, leaking confidential information through innocuous prompts.

The spread of Shadow AI is like a plot twist driven by the accessibility of generative AI tools. Unlike older technologies that required technical expertise, today’s generative AI only demands a knack for prompt engineering. This simplicity allows AI tools to proliferate across an organization, even infiltrating traditionally non-tech-savvy departments. With low costs and minimal technical barriers, these AI activities can evade management’s radar and slip past traditional control mechanisms.
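One common countermeasure to tools slipping past management’s radar is scanning egress or DNS logs for connections to known generative-AI services. A minimal sketch, assuming a simple whitespace-delimited “user domain” log format; the domain list is illustrative, not an authoritative inventory:

```python
# Hypothetical watchlist of generative-AI endpoints (illustrative only).
GENAI_DOMAINS = {"api.openai.com", "chat.openai.com", "midjourney.com"}

def flag_shadow_ai(log_lines):
    """Yield (user, domain) pairs where traffic hit a GenAI endpoint.

    Matches exact domains and their subdomains from the watchlist."""
    for line in log_lines:
        user, domain = line.split()
        if domain in GENAI_DOMAINS or any(
                domain.endswith("." + d) for d in GENAI_DOMAINS):
            yield user, domain
```

Such a scan only catches known endpoints, of course, which is part of the point: the low barrier to entry means the watchlist is always trailing the tools.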

Imagine a marketing team using Midjourney to create images for a new ad campaign. In the past, they would need a budget (requiring managerial approval) and technical setup (involving IT staff), alerting GRC functions and triggering appropriate workflows. Now, they can simply sign up online, pay a small fee, and start creating. This democratization is like the empowerment of rebel hackers in a sci-fi narrative, posing significant challenges for those tasked with protecting organizational assets.