How to Speed Up Military Drone Innovation in America

German news captures 2022 sentiment that Russia is growing weaker by the drone

A rather superficial analysis featured on War on the Rocks reveals a surprising lack of depth in its discourse on artificial intelligence (AI). In essence, the crux of the argument is that by lowering expectations, particularly for reliability, “innovation” is reduced to nothing more than pushing a colossal and conveniently uncomplicated “plug and pray” button.

The authors’ apparent reductionist perspective not only fails to grasp the intricacies of AI’s potential in the realm of warfare but also overlooks the nuanced challenges that seasoned military analysts, with decades of combat experience, understand are integral to the successful integration of advanced technologies on the battlefield.

America’s steadfast commitment to safety and security assumes that the United States has the three to five years to build said infrastructure and test and redesign AI-enabled systems. Should the need for these systems arise sooner, which seems increasingly likely, the strategy will need to be adjusted.

A closer examination of America’s commitment to safety and security reveals that a truly steadfast commitment inherently implies less reliance on assumptions. The authors, however, leave a significant void in their argument by never adequately clarifying their position on this. The closest semblance of an alternative is their proposition of a vague aspirational path labeled AI “assurance,” positioned between the extremes of measured caution and imprudent haste.

…urgently channel leadership, resources, infrastructure, and personnel toward assuring these technologies.

A realist imperative, however, underscores the dynamic nature of the geopolitical landscape, necessitating a proactive stance rather than a reactive one. Planning three to five years ahead is a tangible goal, as opposed to shrinking release cycles into the imprudent “burn toast, scrape faster” mentality. The strategic imperative lies not merely in constructing a sophisticated AI apparatus but also in ensuring resilience and adaptability to the predictable exigencies of future conflict scenarios.

Here are a few instances of downrange events that unequivocally warrant the disqualification of AI innovations, a consideration surprisingly absent in the referenced article:

Source: My presentation on hunting robots, 2023 RSA SF Conference

This War on the Rocks article by a “native Russian speaker,” however, shamelessly bestows excessive praise on Russia for its acceleration toward an ill-conceived “automated kill chain” characterized by total disregard for baseline assurances. In doing so, the authors fail to acknowledge the pivotal lesson of drone engineering from the battlefield: Ukraine left oppressive Russian corruption and hollow patronage behind and strongly asserted measured morality and quality control, which has been the true catalyst for Ukraine’s rapid and successful drone innovations (leaving the Russians perpetually in a clueless catch-up mode).

Russia’s reckless pursuit and indiscriminate deployment of AI, as highlighted in the War on the Rocks article, contribute to the mounting evidence of Russian tanks and troops being grossly outmatched by adversaries who prioritize fundamental training and employ sophisticated countermeasures.

An overwhelming desire to switch into an “at any cost” catch-up mode, devoid of morality, is of little benefit when it brings about crushing technical debt and self-destructive consequences.

Remarkably, the authors neglect to explain their omission of Ukrainian strides in “small, relatively inexpensive consumer and custom-built drones” as an integral aspect of an American military strategy for effective targeting. Equally puzzling is their apparent belief that innovation ceases when others replicate it.

Taking a broader perspective, the American military ethos, characterized by augmentation of skilled professionals in tanks, has demonstrably outshone Russia’s reliance on over-automation guided by disposable conscripts who kill themselves even faster than their enemy can. Despite Russia’s boastful rhetoric, its inability to distinguish between effective and ineffective strategies echoes historical patterns familiar to statisticians of World War II who examined the Nazis’ lack of technological prowess.

AI, far from being an exception to historical trends, appears to be a recurrence of unfavorable chapters. Reflect on the crossbow, the longbow, the repeating rifle, or even Churchill’s “water” tanks (e.g., how America ended up mass-producing Britain’s innovations), and the trajectory becomes evident. Advancements in genuine measures of safety and security (weapon assurance as a practical measure) have defined battlefields for centuries.

Abraham Lincoln famously urged the prudent use of time to sharpen an axe before felling a tree, a maxim applicable to any technology. The historical narrative strongly indicates that AI, as a technological frontier, will only serve to underscore the enduring wisdom encapsulated in the words of the President who delivered an unconditional victory in America’s Civil War.

“Ein Trommelfeuer der Desinformation”: German Investigation of Russian Influence Campaigns on X (Twitter)

Interesting phrasing can be found in Der Spiegel (original report in German)

Specialists from the Federal Foreign Office have identified a systematic Russian campaign on Elon Musk’s platform. In the federal government, concerns about electoral influence are growing.

The story is titled “a drumfire of disinformation” (“Trommelfeuer der Desinformation”), referring to German psychological operations and artillery tactics of WWI used to demoralize soldiers in trenches.

“Trommelfeuer aufs Trommelfell: Der Erste Weltkrieg…” (“Drumfire on the eardrum: The First World War…”) Source: BPB.de

A huge number of deceptive messages launched from Russia seemed to carry some common threads. The meticulously crafted military intelligence propaganda was tossed into the rising hot air of trending generic hashtags like #Oktoberfest or #Bundesliga, to shower influence over the widest possible audience.

The gravity of misinformation reached a zenith with a particularly impactful fraud targeting the personal account of Annalena Baerbock, Germany’s foreign minister. A counterfeit text in her name, promoted by Elon Musk’s platform, suggested that backing Ukraine was a threat to German prosperity. This deceptive Russian government campaign, however, embedded a Cyrillic anomaly that presented investigators with an obvious marker and inadvertently exposed Elon Musk’s growing links to deception.
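For illustration only, here is a minimal sketch (hypothetical Python, not the German investigators’ actual tooling) of how such a mixed-script marker can be flagged automatically: stray Cyrillic letters inside an otherwise Latin-script post. The sample post and function name are invented for this example.

```python
import unicodedata

def mixed_script_markers(text: str) -> list[str]:
    """Return characters whose Unicode name marks them as Cyrillic."""
    return [ch for ch in text
            if ch.isalpha() and "CYRILLIC" in unicodedata.name(ch, "")]

# Hypothetical forged post: Cyrillic "о" (U+043E) stands in for Latin "o"
post = "Die Unterstützung der Ukraine bedr\u043eht unseren W\u043ehlstand"
markers = mixed_script_markers(post)
if markers:
    print(f"Suspicious Cyrillic characters found: {markers}")
```

Real investigations rely on far more than one stray character, but even a simple check like this shows why a single Cyrillic letter becomes an obvious marker of a forgery.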

Any investigator who delves into the labyrinth of deceit on the “Swastika” site (formerly known as Twitter) can see how an orchestrated effort seeks not only to distort public perception but also to undermine the integrity of critical political figures and any government’s stance on international affairs.

With Elon Musk already promoting Nazi political candidates in Germany, a prevailing undertone emerged in this Russian-backed propaganda campaign, as reported in Der Spiegel. Elon Musk’s platform generates the false narrative that the Olaf Scholz government is forsaking the welfare of Germans because of military aid and humanitarian assistance to Ukraine, compounded by welcoming over a million refugees fleeing Russian aggression.

Der Spiegel highlighted the conspicuous resemblance in the discourse surrounding the counterfeit posts and the narrative espoused by Elon Musk’s favored party, the Alternative for Germany (AfD).

“Mit seinen klar definierten Elementen (Logos, Farben, Schriften, Sprache…) und deren Zusammenspiel transportiert unser Erscheinungsbild Botschaften und Werte, die uns Einzigartigkeit verleihen und eindeutig von unseren Wettbewerbern unterscheiden.” (“With its clearly defined elements (logos, colors, typefaces, language…) and their interplay, our visual identity conveys messages and values that lend us uniqueness and clearly distinguish us from our competitors.”) Source: AfD

This political faction, considered the modern Nazi party, staunchly critiques the government’s stance on Ukraine. Notably, the AfD maintains established connections with the Kremlin and adopts a sympathetic posture toward Vladimir Putin. This alignment in rhetoric and shared sentiment underscores a potential nexus between the disinformation disseminated on Elon Musk’s platform and the political leanings of the AfD, raising obvious questions about the influence and motivations behind such orchestrated Russian messaging campaigns.

SC Tesla Kills One in “Veered” Crash Into a Tree

Another day, another “veered” Tesla straight into a tree.

Troopers responded at approximately 12:20 a.m. to Spanish Wells Road near Marshland Road where a 2016 Tesla sedan crashed, killing the driver, Lance Cpl. Nick Pye said.

Investigators say the vehicle was traveling south on Spanish Wells Road when the vehicle ran off the road to the right and struck a tree.

Source: Google Maps

NHTSA Recall of 200K Brand New Tesla Cars Cites “Software Instability”

UK authorities had to issue a warning that listening to impatient and immature “life hack” fraudsters can kill you, your family and everyone around you.

Over the years, Tesla’s short-sighted management culture, characterized by a grossly negligent mindset to dump product into the market as quickly as possible despite known flaws (essentially, “burn toast now, scrape later”), has resulted in some troubling instances. A glaring example is evident in the recent NHTSA recall case.

Tesla, Inc. (Tesla) is recalling certain 2023 Model S, X, and Y vehicles equipped with full self-driving computer 4.0 and running a software release version 2023.44.30 through 2023.44.30.6 or 2023.44.100. Software instability may prevent the rearview camera image from displaying.
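As a rough illustration of the affected window named in that notice, here is a minimal sketch (hypothetical, not NHTSA or Tesla tooling) of checking whether a vehicle’s build string falls inside the recalled ranges; the function names and comparison rules are assumptions made for this example.

```python
# Hypothetical check against the ranges named in the recall notice:
# 2023.44.30 through 2023.44.30.6, or exactly 2023.44.100.
def parse_build(version: str) -> tuple[int, ...]:
    return tuple(int(part) for part in version.split("."))

def is_recalled(version: str) -> bool:
    v = parse_build(version)
    low, high = parse_build("2023.44.30"), parse_build("2023.44.30.6")
    return low <= v <= high or v == parse_build("2023.44.100")

print(is_recalled("2023.44.30.5"))  # True: inside the recalled range
print(is_recalled("2023.44.25"))    # False: earlier build, not named
```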

In this safety regulator’s report it is clear how a giddy “science fiction” dream of abruptly eliminating physical mirrors in favor of software-only visibility turned out to be a predictably stupid and unsafe implementation.

The 2023 Tesla manufacturing processes, while aiming for a sleek design, pushed 200,000 vehicles onto public roads with unsafe visibility issues due to buggy software, essentially rendering drivers blind.

“Software instability”

That phrase is a HUGE dig by the NHTSA at Tesla’s counter-safety culture. The car brand often bleats and gloats about rapidly throwing problems at software hacks, ignoring regulations, instead of using traditional multi-modal automotive safety practices.

Any guess how Tesla will address this recall? With… wait for it… just more and more and more software instability.

One of the first to report problems with the update was 2023 Tesla Model Y Long Range owner Brandon Yang, who told us he has owned his electric SUV for less than four months. Yang was driving his Tesla when he noticed the car’s active safety and driving assists weren’t working, so he arranged with a Tesla service center to have his car updated. But when the vehicle tried to update itself to build 30.8, Yang’s car got stuck downloading the update and wouldn’t shut off, repeatedly cycling through attempts that wouldn’t complete.

This occurred over a period of more than 72 hours, during which the car gradually drained its battery. Yang eventually disabled Wi-Fi to break the cycle, but the car enabled an internal LTE antenna to continue trying (and failing) to update. After much back-and-forth with the service center, Yang was told his car needed a new Autopilot computer to fix the problem—though on pickup a week later, he was told a software script had fixed it after all.

While trying to sort it all out, Yang posted to Reddit for advice, and was met with numerous other owners with the same issue. […] Tesla however has shipped bad updates before, in one case recalling a Full Self-Driving Beta update that caused cars to slam on their brakes with alarming frequency.

The ballooning technical debt that is producing amateur-level script failures in 2023 Tesla cars should be no surprise. The more Tesla depends on “rapid patch” fixes all the time and everywhere, instead of sound engineering principles and safety regulations, the more likely this all ends in financial and moral bankruptcy. Tesla is digging itself into a giant, predictable engineering disaster, placing everyone in and around its cars in danger.
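By contrast, consider a minimal sketch (hypothetical, not Tesla firmware) of the kind of bounded, battery-aware retry logic that sound engineering would put around an over-the-air update, precisely to avoid the multi-day drain loop described above; the thresholds and function names are assumptions for the example.

```python
import time

# Hypothetical sketch, not Tesla firmware: a bounded OTA update retry with
# exponential backoff and a battery floor, so a failing update cannot keep
# the vehicle awake and drain the pack for days.
MAX_ATTEMPTS = 5
MIN_BATTERY_PCT = 30

def try_update(download, battery_pct) -> bool:
    """download() returns True on success; battery_pct() returns 0-100."""
    delay = 60  # seconds before the first retry
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if battery_pct() < MIN_BATTERY_PCT:
            print("Battery too low; deferring update until charging.")
            return False
        if download():
            print(f"Update installed on attempt {attempt}.")
            return True
        if attempt < MAX_ATTEMPTS:
            print(f"Attempt {attempt} failed; retrying in {delay}s.")
            time.sleep(delay)
            delay *= 2  # back off instead of hammering the link
    print("Giving up and flagging for service instead of looping forever.")
    return False
```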

In related news:

Tesla hacked, 24 zero-days demoed at Pwn2Own Automotive 2024

TWENTY-FOUR ZERO-DAYS.