Category Archives: Sailing

2018 AppSec California: “Unpoisoned Fruit: Seeding Trust into a Growing World of Algorithmic Warfare”

My latest presentation on securing big data was at the 2018 AppSec California conference:

When: Wednesday, January 31, 3:00pm – 3:50pm
Where: Santa Monica
Event Link: Unpoisoned Fruit: Seeding Trust into a Growing World of Algorithmic Warfare

Artificial Intelligence, or even just Machine Learning for those who prefer organic, is influencing nearly all aspects of modern digital life. Whether it be financial, health, education, energy, transit…emphasis on performance gains and cost reduction has driven the delegation of human tasks to non-human agents. Yet who in infosec today can prove agents worthy of trust? Unbridled technology advances, as we have repeatedly learned in history, bring very serious risks of accelerated and expanded humanitarian disasters. The infosec industry has been slow to address social inequalities and conflict that escalates on the technical platforms under their watch; we must stop those who would ply vulnerabilities in big data systems, those who strive for quick political (arguably non-humanitarian) power wins. It is in this context that algorithm security increasingly becomes synonymous with security professionals working to avert, or as necessary helping win, kinetic conflicts instigated by digital exploits. This presentation therefore takes the audience through technical details of defensive concepts in algorithmic warfare based on an illuminating history of international relations. It aims to show how and why to seed security now into big data technology rather than wait to unpoison its fruit.

Copy of presentation slides: UnpoisonedFruit_Export.pdf

Did a Spitfire Really Tip the Wing of V1?

Facebook has built a reputation for being notoriously insecure, taking payments from attackers with little to no concern for the safety of its users; but a pattern of neglect for information security is not exactly the issue when a finance guy in Sydney, Australia gives a shout-out to a Facebook user for what he calls an “amazing shot” in history:

As anyone hopefully can see, this is a fake image. Here are some immediate clues:

  1. Clarity. What photographic device of that era would have such an aperture, let alone resolution?
  2. Realism. The rocket exhaust, markings, and ground detail are all too “clean” to be real; that exhaust in particular is an eyesore.
  3. Positioning. Spitfire velocity and turbulence relative to the V1 make such a deep wing-over-wing overlap in steady formation very unlikely.
  4. Vantage point. Given the positioning issue, a photographer holding a close position aft of the Spitfire is even less likely.

That’s only a quick list, but it makes a solid point: this is a fabrication anyone should be able to discount at first glance (and, for those who want to go further, programmatically; see the sketch below). In short, when I see someone say they found an amazing story or image on Facebook, there’s a very high chance it’s toxic content meant to deceive and harm, much in the same way tabloid stands in grocery stores used to operate. Entertainment and attacks should be treated as such, not as realism or useful reporting.
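To go beyond eyeballing, here is a minimal error-level analysis (ELA) sketch in Python, assuming the Pillow library is installed and “photo.jpg” is a hypothetical local copy of the suspect image; composited regions often recompress differently from the rest of a JPEG:

    from PIL import Image, ImageChops

    # Resave the JPEG at a known quality, then diff against the original;
    # regions with unusual error levels hint at compositing or retouching.
    original = Image.open("photo.jpg").convert("RGB")
    original.save("resaved.jpg", quality=90)
    resaved = Image.open("resaved.jpg")

    ela = ImageChops.difference(original, resaved)  # per-pixel error levels
    print("error extrema per channel:", ela.getextrema())
    ela.save("ela.png")  # bright patches mark the suspect areas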

Now let’s dig a little deeper.

In 2013 an “IAF Veteran” posted a shot of a Spitfire tipping a V1.

This passes many of the obvious tests above. He also inserts concern about the dangers of firing bullets: reliably blowing up a V1 in the air, far away from civilians, versus sending it unpredictably into the ground. Ignore that misleading analysis (shooting always remained the default) and revel instead in the authentic combat photo quality of that time.

After this 2013 tweet several years passed with nobody talking about V1 tipping, until only a few weeks ago, when a “Military aviation art” account posted a computer-rendered image with a brief comment:

Part of a new work depicting the first tipping of a V-1 flying bomb with a wing tip. Who achieved this?

It is a shame the artist’s tweet wasn’t given proper and full credit by the Sydney finance guy, as it would have made far more sense to link to the artist talking about their “new work”, or even to their art gallery and exact release dates:

Who achieved this? Who indeed? The artist actually answered their own question in their very next tweet, where they wrote…

On the bright side the artist answers their own question with some real history and a real photo, worth researching further. On the dark side, the artist’s answer sadly omits any link to original source or reference material, let alone the (attempted) realism found above in that “IAF veteran” tweet with an actual photograph.

The artist simply says it is based on a real event, and leaves out the actual photograph (perhaps to avoid acknowledging the blurry inspiration to their art) while including a high-resolution portrait photo of the pilot who achieved it.

It is kind of misleading to use that high-resolution photograph of Ken Collier sitting on the ground, instead of one like the IAF Veteran tweeted: an actual photograph of a V1 being intercepted (e.g. Imperial War Museum CH16281).

The more complete details of this story not only are worth telling, they put the artist’s high-resolution fantasy reconstruction (of the original grainy blotchy image) into proper context.

Uncropped original has a border caption that clearly states it’s art, not a photo

Fortunately “V1 Flying Bomb Aces” by Andrew Thomas is also online, and through first-person accounts and a squadron diary it tells us what really happened (notice both original photographs are put together in this book, the plane and the pilot).

And for another example, here’s what a Vickers staged publicity photo of a Spitfire looked like from that period.

A Spitfire delivers beer to thirsty Allied troops

It shows the “Mod XXX Depth Charge” configuration (two 18-gallon barrels of bitter, “beer bombs” to deliver into Normandy) and you can be sure an advertising/propaganda agency would have used the clearest resolution possible; the British don’t mess around with their beer technology.

Again notice the difference between air and ground photos, even when both are carefully planned and staged for maximum clarity.

Strong’s Brewery Barrels Locked and Loaded.

Back to the point here: in normal operations the V1 would be shot down, not “tipped”, as described below in a Popular Mechanics article about hundreds being destroyed in 1944 by the Tempest’s 20mm cannon configuration.

Popular Mechanics Feb 1945

Just to make it absolutely clear (the Popular Mechanics details about cannons unfortunately don’t explain shooting versus tipping) here’s a log from Ace pilots who downed V1s.

Excerpted from “V1 Flying Bomb Aces” by Andrew Thomas

So you can see how debris after explosions was a known risk to be avoided, even leading to gun modifications for hits at longer range. It also characterizes tipping as so unusual and infrequent that it would come mainly at the end of a run (e.g. with guns jammed).

As a quick aside on what followed soon after WWII ended: as I wrote on this blog last year, things inverted. Shooting drones down in the 1950s was far more dangerous than tipping them, because of the increased firepower used (missiles).

Compared to shooting cannons at the V1, shooting missiles at drones was more like launching a bunch of V1s to hit a bigger V1, which ended as badly as it sounds (lots of collateral ground damage).

Again, the book “V1 Flying Bomb Aces” confirms specific ranges were used in the 1940s for shooting the bombs so they exploded in the air without causing harm, preferred over tipping.

Osprey Pub., Sep 17 2013.
ISBN 9781780962924

…the proper range to engage the V1 with guns was 200-250 yards.

Further out and the attacker would only damage the control surfaces, causing the V1 to crash and possibly cause civilian casualties upon impact.

Any closer and the explosion from hitting the V1’s warhead could damage or destroy the attacking aircraft.

Apparently the reason tipping worked at all was that the poorly engineered Nazi technology had a gyro stabilizer for only two axes; flight control lacked any roll correction.

V1 Flying Bomb Gyro. Source: MechTraveller

Tipping really was a dangerous option, technically and scientifically, because physics would send the bomb out of control to explode on something unpredictable.
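To illustrate why, here is a toy roll-axis model of my own (an illustration, not V1 engineering data): the stabilized pitch/yaw axes fight a disturbance back toward level, while the roll axis, with no gyro feedback, just keeps going after a wingtip nudge:

    # Toy model: bank angle ten seconds after a 15-degree wingtip nudge,
    # with and without gyro feedback. Gains are illustrative only.
    def simulate(gain: float, damping: float) -> float:
        bank, rate, dt = 15.0, 5.0, 0.1  # degrees, deg/s, seconds
        for _ in range(100):             # ten simulated seconds
            torque = -gain * bank - damping * rate
            rate += torque * dt
            bank += rate * dt
        return bank

    print(f"stabilized axis (pitch/yaw): {simulate(2.0, 1.0):+.1f} deg")
    print(f"free axis (V1 roll):         {simulate(0.0, 0.0):+.1f} deg")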

Back to the curious case of the artist rendering that started this blog post: it was a Spitfire pilot who found himself firing until out of ammo, and in that moment of urgency he decided to tip a wing of the V1.

Only because he ran out of bullets, in a rare moment, did he decide to tip. Of course others later used the same desperate move, but the total number of V1s tipped this way barely reached the dozens, versus the thousands destroyed by gunfire.

Shooting the V1 always was preferred, as exploding it in the air killed far fewer people than tipping it to explode on the ground. This also is documented in detail by Meteor pilots, who hoped to match the low-altitude high speed of a V1.

Compared with the high performance piston-engined fighters then in service with the RAF (the Tempest V and Spitfire XIV), the Meteor offered little in the way of superior performance. Where it excelled, however, was at low level – exactly where the V1 operated. The Meteor I was faster than any of its contemporaries at such altitudes. This was just as well, for the V1 boasted an average speed of roughly 400mph between 1,000ft and 3,000ft. At those heights the Tempest V and Spitfire XIV could make 405mph and 396mph, respectively, using 150-octane fuel. The Meteor, on the other hand, had a top speed of 410mph at sea level. […] While the first V1 to be brought down by a Meteor was not shot down by cannon fire, the remaining 11 credited to No. 616 Sqn were, using the Meteor I’s quartet of nose-mounted 20mm cannons.

Note the book’s illustration of a V1 being shot at from above and behind. Osprey Publishing, Oct 23 2012. ISBN 9781849087063
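The speeds in that passage are worth a quick back-of-the-envelope check (my own arithmetic, using the book’s quoted figures and an assumed one-mile starting gap):

    # Time for each interceptor to close a one-mile gap on a 400 mph V1.
    v1 = 400.0   # mph, average V1 speed quoted above
    gap = 1.0    # miles, assumed starting distance for illustration

    for name, speed in [("Tempest V", 405.0), ("Spitfire XIV", 396.0),
                        ("Meteor I", 410.0)]:
        overtake = speed - v1
        if overtake <= 0:
            print(f"{name}: cannot close on the V1 at this altitude")
        else:
            print(f"{name}: {overtake:.0f} mph overtake, about "
                  f"{gap / overtake * 60:.0f} minutes to close one mile")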

Does a finance guy in Sydney feel accountable for claiming a real event instead of admitting to an artist’s fantasy image?

Of course not. He has been responding to people saying he thinks it still is a fine representation of a likely event (it isn’t), and he doesn’t measure any harm from the confusion caused; he believes the harm he has done still doesn’t justify making a correction.

Was he wrong to misrepresent it, and should he delete his “amazing shot” tweet and replace it with one that says amazing artwork or new rendering? Yes, that would be the sensible thing if he cares about history and accuracy, but the real question centers on the economics of why he won’t change.

Despite being repeatedly made aware that he has become a source of misinformation, the cost of losing “likes” probably weighs heavier on him than the cost of having a low-integrity profile. And as I said at the start of this post (and have warned since at least 2009, when I deleted my profile), the real lesson here is that Facebook loves low-integrity people.

Runaway Killer Military Drones of the 1940s

I recently met with members of state and federal government to discuss counter-drone solutions. When I brought up the 1940s drone issues they said “waaaaaht, I never” so here we go…

There was a 1935 Radioplane (RP-1) that by 1939 evolved into the OQ-2, used in the subsequent years. Over 15,000 of these UAVs flew for the US Army Air Forces (USAAF) and others, all the way to 1952.

…training tool for anti-aircraft gunners. Such specialists were required to hit a moving target several miles up with cannon fire and it seemed an adequate thought that having a remote-piloted aircraft in training would make for more accurate gunners on the ground.

I’ll make a special side note here to mention that this concept of training anti-aircraft to hit moving targets was also the birth of artificial intelligence and “cyber”.

Cybernetics (coined from Greek kybernetes for “captain” of a ship or more literally someone who steers) was a book published in 1948 by Norbert Wiener.

It was based on his World War II experiments in anti-aircraft systems meant to anticipate planes by interpretation of radar images. And cybernetics/kybernetes should not be confused with kubernetes, which is Google’s highly popular open source software to steer swarms of containers.

By the end of the 1940s there had been an overproduction of prop warplanes, which also led the US to convert its Hellcats into UAVs for target practice.

Artist rendering of Hellcat UAV color scheme

Unfortunately a 1956 incident known as “Battle of Palmdale” made this UAV concept somewhat unattractive for two reasons.

First, the US Navy lost radio connections and had to attempt to “shoot down pilotless F6F-5K Hellcat, minutes after it went out of control”.

And second, the worst part was not a runaway military drone accidentally flying towards Los Angeles, but rather how “twin Scorpion interceptors fired more than 200 missiles, missing their target each time”.

Instead the missiles — each pod containing 52 Mighty Mouse 2.75-inch rockets — damaged property and set off a string of brush fires across northern Los Angeles County. The Hellcat drone finally crash-landed harmlessly in the Mojave Desert. Angry and frightened residents complained. Los Angeles County Supervisor Roger W. Jessup promised a detailed investigation and introduced a resolution urging the “utmost care” by Navy officials in sending the “robot planes skyward.”

Note how detailed these stories are of American military planes firing rockets into civilian areas.

Edna Carlson was at home there with her 6-year-old son when shrapnel exploded through her front window, bounced off the ceiling, pierced a wall and landed in a cupboard.

More fragments passed through the home and garage of J. R. Hingle, barely missing a visitor sitting on his couch.

Larry Kempton was driving west on Palmdale Boulevard with his mother, Bernice, when a rocket hit the street in front of the car. Fragments splintered the windshield, blew out a tire and put holes in the radiator. Neither person was injured.

Historians point out a detail that makes this a double whammy: the *ROCKETS themselves were DRONES*, “supposed to disarm themselves”, and they didn’t function intelligently as intended either.

The Air Force reported that the missiles were designed to arm themselves when fired, but if they missed their target, they were supposed to disarm themselves when they flew below a certain speed.

In other words it’s understandable how some might want to forget that drone development in the 1940s ended with scorched earth, wildfires raging, firefighters scrambling and American civilians running for cover.
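Incidentally, the quoted arm/disarm behavior reads like a simple state machine. Here is a hypothetical reconstruction in Python (the actual speed threshold is not given anywhere, so the number below is invented):

    from enum import Enum

    class Fuze(Enum):
        SAFE = 1
        ARMED = 2
        DISARMED = 3

    DISARM_SPEED = 300.0  # m/s; placeholder, the real threshold is unknown

    def step(state: Fuze, fired: bool, speed: float) -> Fuze:
        if state is Fuze.SAFE and fired:
            return Fuze.ARMED         # arms itself when fired
        if state is Fuze.ARMED and speed < DISARM_SPEED:
            return Fuze.DISARMED      # the safeguard that failed in 1956
        return state

    state = step(Fuze.SAFE, fired=True, speed=600.0)  # launch: SAFE -> ARMED
    state = step(state, fired=False, speed=200.0)     # slow: ARMED -> DISARMED
    print(state)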

Headlines about drones in America essentially started as fun and inspiring: “Guff wins 1st place at the 1938 Radio Control Nationals”.

And then two decades later: “208 Rockets Fired at Runaway Plane: Missiles Spray Southland Area in Effort to Halt Wild Drone”, which set the stage for public nervousness through the 1960s about automated missiles (the Cuba Crisis), let alone surveillance drones and driverless cars… all a reality at that time.

Could truck drivers lose their jobs to robots?

Next time you bang on a vending machine for a bottle that refuses to fall into your hands, ask yourself if restaurants soon will have only robots serving you meals.

Maybe it’s true there is no future for humans in service industries. Go ahead, list them all in your head. Maybe problems robots have with simple tasks like dropping a drink into your hands are the rare exceptions and the few successes will become the norm instead.

One can see why it’s tempting to warn humans not to plan on expertise in “simple” tasks like serving meals or tending a bar…take the smallest machine successes and extrapolate into great future theories of massive gains and no execution flaws or economics gone awry.

Just look at cleaning, sewing and cooking for examples of what will be, how entire fields have been completely automated with humans eliminated…oops, scratch that, I am receiving word from my urban neighbors they all seem to still have humans involved and providing some degree of advanced differentiation.

Maybe we should instead look at darling new startup Blue Apron, turning its back on automation, as it lures millions in investments to hire thousands of humans to generate food boxes. This is such a strange concept of progress and modernity to anyone familiar with TV dinners of the 1960s and the reasons they petered out.

Blue Apron’s meal kit service has had worker safety problems

Is it just me, or is anyone else suddenly nostalgic for that idyllic future of food automation (everything containerized, nothing blended) as suggested in a 1968 movie called “2001”? We’re 16 years late now and I still get no straw for my fish container.

2001 prediction of food

I don’t even know what that box on the top right is supposed to represent. Maybe 2001 predicted chia seed health drinks.

Speaking of cleaning, sewing and cooking with robots…someone must ask at some point why much of automation has focused on archetypal roles for women in American culture. Could driverless tech be targeting the “soccer-mom” concept along similar lines; could it arguably “liberate” women from a service desired from patriarchal roles?

Hold that thought, because instead right now I hear more discussion about a threat from robots replacing men in the over-romanticized male-dominated group of long-haul truckers. (Protip: women are now fast joining this industry)

Whether measuring accidents, inspections or compliance issues, women drivers are outperforming males, according to Werner Enterprises Inc. Chief Operating Officer Derek Leathers. He expects women to make up about 10 percent of the freight hauler’s 9,000 drivers by year’s end. That’s almost twice the national average.

The question is whether American daily drivers, many of whom are professionals in trucks, face machines making them completely redundant, just like vending machines eliminating bartenders.

It is very, very tempting to peer inside any industry and make overarching forecasts of how jobs simply could be lost to robots. Driving a truck on the open roads, between straight lines, sounds so robotic already to those who don’t sit in the driver’s seat. Why this has not already been automated is the question we should be answering, rather than how soon it will happen.

Only at face value does driving present a bar so low (pun not intended) that machines easily could take it over today. Otto of the 1980 movie “Airplane!” fame comes to mind for everyone, I’m sure, sitting ready to be, um, “inflated” and take over any truck anywhere to deliver delicious TV dinners.

Otto smokes a cig

Yet when scratching at the barriers, maybe we find trucking is more complicated than this. Maybe there is more to human processes, something really intelligent, than meets a non-industry-specific robotic advocate’s eye.

Systems that have to learn, true robots of the future, need to understand the totality of the environment they will operate within. And this raises the question of “knowledge” about all the tasks being replaced, not simply the ones we know from watching Hollywood interpretations of the job. A common mistake is to underestimate knowledge and predict its replacement with an incomplete checklist of tasks believed to point in the general direction of success.

Once the environmental underestimation mistake is made, another mistake is to forecast cost improvements by accelerating checklists towards a goal of immediate decision capabilities. We have seen this with bank ATMs, which actually cost a lot of money to build and maintain and never achieved replacement of teller decision-trees; worse, new security risks and fraud were introduced that required humans to develop checklists and perform menial tasks to maintain ATMs, which still haven’t achieved full capability. This arguably means new role creation is the outcome we should expect, mixed with a modest or even slow decline of jobs (less than 10% over 10 years).

Automation struggles to eliminate humans completely because of the above two problems (the need for common sense and foundations, and the need for immediate decision capabilities built on those foundations), and that’s before we even get to the need for memory, feedback loops and strategic thinking. The latter two are essential for robots replacing human drivers. Translation to automation brings out nuances in knowledge that humans excel at, as well as long-term thinking both forwards and backwards.

Machines are supposed to move beyond limited data sets and be able to raise minimum viable objectives above human performance, yet this presupposes success at understanding context. Complex streets and dangerous traffic situations are a very high bar, so high they may never be cleared without principled human oversight (e.g. ethics). Without deep knowledge of trucking in its most delicate moments, the reality of driver replacement becomes augmentation at best. Unless the “driver” definition changes, goal posts are moved, and expectations for machines are measured far below full human capability and environmental possibility, we remain a long way from replacement.

Take for example the amount of time it takes to figure out the risk of killing someone on an urban street full of construction, school and loading zones. A human is not operating within a window 10 seconds from impact, because they typically aim to identify risks far earlier, avoiding catastrophes born from leaving decisions to the last seconds.
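To put numbers on that window, here is a rough sketch of my own (not from any study) showing how far ahead a driver, human or machine, must be reading the street:

    # Distance covered during a 10-second decision window, by speed.
    def horizon_m(speed_mph: float, window_s: float = 10.0) -> float:
        return speed_mph * 0.44704 * window_s  # mph -> m/s, times seconds

    for mph in (25, 40, 55):
        print(f"{mph} mph: risks must be read ~{horizon_m(mph):.0f} m ahead")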

I’m not simply talking about control of the vehicle, incidentally (no pun intended); I also mean decisions about insurance policies and whether to stay and wait for law enforcement to show up. Any driver with rich experience behind the wheel could tell you this, and yet some automation advocates still haven’t figured it out, as they emphasize that the sub-second speed of their machines is all they need/want for making decisions, with no intention to obey human-imposed laws (hit-and-run incidents increased more than 40% after Uber was introduced to London, causing 11 deaths and 5,000 injuries per year).

For those interested in history, we’re revisiting many of the dilemmas posed the first time robotic idealism (automobiles) brought new threat models to our transit systems. Read a 10 Nov 1832 report on deaths caused by ride-share services, for example.

The Inquest Jury found a verdict of manslaughter against the driver,—a boy under fifteen years of age, and who appeared to have erred more from incapacity than evil design; and gave a deodand of 50l. against the horse and cabriolet, to mark their sense of the gross impropriety of the owner in having intrusted the vehicle to so young and inexperienced a person.

1896 London Public Carriages

Young and inexperienced is exactly what even the best “learning” machines are today. Sadly, for most of 19th Century London, authorities showed remarkably little interest in shared ride driving ability. Tests to protect the public from weak, incapacitated or illogical drivers of “public carriages” started only around 1896.

Finding the balance between insider expertise based on experience and outsider novice-learner views is the dialogue playing out behind the latest NHTSA automation scale, meant to help regulate safety on our roads. People already are asking whether the costs to develop systems above “level three” (cede control under certain conditions and environments) are justified. That third level of automation is what outsiders typically argue to be the end of the road for truck drivers (as well as soccer moms).

The easy answer to the third level is no, it still appears to be years before we can SAFELY move above level three and remove humans in common environments (not least of all because hit-and-run murder economics heavily favor driverless fleets). Cost reductions today through automation make far more sense at the lower ends of the scale, where human driver augmentation brings sizable returns and far fewer chances of disaster or backlash. Real cost, in human life and error, escalates quickly when we push into the full range of even the basic skills necessary to be a safe driver in every environment or on any street.

There also is a more complicated answer. By 2013 we saw Canadian trucks linking up on Alberta’s open roads and using simple caravan techniques. Repeating methods known for thousands of years, like a camel watching the tail of the one in front through a sandstorm, driver fatigue and energy costs were significantly reduced through caravan theory. In very limited private environments (e.g. competitions, ranches, mines, amusement parks) the cost of automation is less and the benefits are realized early.
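Some rough arithmetic on why caravans pay off, using assumed figures for illustration (a loaded truck near 6.5 mpg and roughly 10% fuel savings for each drafting truck; studies vary and this post cites none):

    # Fuel per mile for six trucks running solo vs. in a drafting caravan.
    trucks = 6
    gal_per_mile = 1 / 6.5   # assumed ~6.5 mpg for a loaded truck
    draft_saving = 0.10      # assumed saving for each trailing truck

    solo = trucks * gal_per_mile
    caravan = gal_per_mile + (trucks - 1) * gal_per_mile * (1 - draft_saving)
    print(f"solo: {solo:.2f} gal/mile, caravan: {caravan:.2f} gal/mile "
          f"({1 - caravan / solo:.0%} saved)")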

I say the answer is complicated because a level three autonomous vehicle still must have a human at the controls to take over, and I mean always. The NHTSA has not yet provided any real guidance on what that means in reality. How quickly a human must take over leaves a giant loophole in defining human presence. Could the driver be sleeping at the controls, watching a movie, or even reposing in the back seat?

The Interstate system in America has some very long-haul segments with traffic flowing at similar speeds and infrequent risk of sudden stops or obstructions. Tesla, in their typically dismissive-of-safety fashion despite (or maybe because of) their cars repeatedly failing and crashing, called major obstructions on highways a “UFO”-frequency event.

Cruise control and lane-assist in pre-approved and externally monitored safe-zones in theory could allow drivers to sleep as they operate, significantly reducing travel times. This is a car automation model actually proposed in the 1950s by GM and RCA, predicted to replace drivers by 1974. What would the safe-zone look like? Perhaps one human taking over the responsibility by using technology to link others, like a service or delegation of decision authority, similar to air traffic control (ATC) for planes. Tesla is doing this privately, for those in the know.

Ideally if we care about freedom and privacy, let alone ethics, what we should be talking about for our future is a driver and a co-pilot taking seats in the front truck of a large truck caravan. Instead of six drivers for six trucks, for example, you could find two drivers “at the controls” for six trucks connected by automation technology. This is powerful augmentation for huge cost savings, without losing essential control of nuanced/expert decisions in myriad local environments.

This has three major benefits. First, it helps with the shortage of needed drivers, mentioned above as being filled by women. Second, it allows robot proponents to gather real-world data in safe open-road operations. Third, it opens the possibility of job expansion and transitions for truckers into drone operations.

On the other end of the spectrum from boring unobstructed open roads, in terms of driverless risks, are the suburban and urban hubs (warehouses and loading docks) that manage complicated truck transactions. Real human brain power still is needed for picking up and delivering the final miles, unless we re-architect the supply chain. In a two-driver, six-truck scenario this means that after arriving at a hub, trucks return to a one-driver-one-truck relationship, like airplanes reaching an airport. Those trucks lacking human drivers at the controls would sit idle in a queue or… wait for it… be “remotely” controlled by the locally present human driver. The volume of trucks (read: percentage of “drones”) would increase significantly while the number of drivers needed might decline only slightly.

Other situations still requiring human control tend to be bad weather or roads lacking clear lines and markings. Again, this would simply mean humans at the controls of a lead vehicle in a caravan. Look at boats or planes for comparison: both have had autopilots far longer, for decades at least, and human oversight has yet to be cost-effectively eliminated.

Could autopilot be improved to avoid scenarios that lead to disaster, killing human passengers? Absolutely. Will someone pay for autopilots to avoid all such scenarios? Hard to predict. On that question, planes are where we have the most data to review, because we treat their failures (likely due to concentrated loss of life) with such care and concern.

There’s an old saw about Allied bombers of WWII being riddled with bullet holes yet still making it back to base. After much study the Air Force put together a presentation and told a crowded room that armor would be added to all the planes where concentrations of holes were found. A voice in the back of the crowd was heard asking “but shouldn’t you put the armor where the holes aren’t? Where are the holes on planes that didn’t come back?”
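The statistics behind that old saw (survivorship bias, usually credited to Abraham Wald) are easy to demonstrate with a small simulation of my own, using made-up hit and survival rates:

    import random

    # Hits land uniformly across sections, but engine hits rarely make it
    # home, so holes counted on *returning* planes are scarce exactly where
    # armor matters most.
    random.seed(1944)
    survival = {"fuselage": 0.95, "wings": 0.95, "tail": 0.90, "engine": 0.30}
    observed = {section: 0 for section in survival}

    for _ in range(10_000):
        hit = random.choice(list(survival))
        if random.random() < survival[hit]:  # only survivors get inspected
            observed[hit] += 1

    for section, holes in observed.items():
        print(f"{section:8s} holes seen on returned planes: {holes}")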

It is time to focus our investments on collecting and understanding failures to improve the driving algorithms of humans, by enhancing the role of drivers. The truck driver already sits on a massively complex array of automation (engines and networks), so adding more doesn’t equate to removing the human completely. Humans still are better at complex situations, such as power loss or reversion to manual controls during failures. Automation can make the flat open straight lines into the sunset more enjoyable, as well as the blizzard and the frozen surface, but only given no surprises.

Really we need to be talking about enhancing drivers: hauling more over longer distances with fewer interruptions. Beyond reduced fatigue and increased alertness with less strain, the best-case use of automation remains augmentation until systems move above level three.

Drivers could use machines to make ethical improvements to their complex delivery logistics (fewer emissions, increased fuel efficiency, reduced strain on the environment). If we eliminate drivers in our haste to replace them, we could see fewer benefits and achieve only the lowest forms of automation, the ones outsiders would be pleased with while those who know better roll their eyes in disappointment.

Or maybe Joe West & the Sinners put it best in their classic trucker tune “$2000 Navajo Rug”:

I’ve got my own chakra machine, darlin’,
made out of oil and steel.
And it gives me good karma,
when I’m there behind the wheel