The Most Militarily Decisive Use of Autonomy You Won’t See


Drones and robots get the headlines, but autonomous cyber weapons will be key to future warfare.

Armed drones and robot pack mules may get the headlines now, but far more powerful strategic effects will be achieved by artificial intelligence and machine-learning systems that select and attack targets autonomously—and fend off the enemy AIs trying to do the same.

As the monograph “20YY: Preparing for War in the Robotic Age,” co-authored by now-Deputy Defense Secretary Robert Work, and the public discussion around the Third Offset strategy make clear, U.S. defense officials believe autonomy will change warfare in the air, sea, land and space domains. Yet policymakers, acquisition professionals, and operators have yet to grapple fully with the implications of autonomy in cyberspace. The word does not even appear in the 2015 DOD Cyber Strategy, nor in the 2016 draft National Cyber Incident Response Plan.

Progress, however, is being made. In June, the Defense Science Board issued an important report that differentiated between “autonomy in motion,” such as robots and self-driving vehicles, and the less-well-known “autonomy at rest,” including most autonomous cyber systems.

Just a few months later, the Defense Advanced Research Projects Agency (DARPA) turned a spotlight on cyber autonomy with its Cyber Grand Challenge at the 2016 DEF CON hacking convention in Las Vegas. Seven teams pitted algorithms against each other in a $2 million contest to autonomously analyze brand-new code and patch its vulnerabilities at machine speed, without breaking the programs being defended.

Discussions with the Cyber Grand Challenge competitors revealed something else: the concept of “counter-autonomy” is becoming more important in cybersecurity. Artificial intelligence and machine learning can help select a target to be attacked autonomously, but the target machine can also learn from the attack mounted against it and design its response accordingly.

This approach is even more powerful when large amounts of data, gathered from distributed, cooperative sensors across the network, can be analyzed quickly and applied to the common defense. In the words of the Defense Science Board report: “The best techniques not only carry out real-time cyber-defense, they also extract useful information about the attacks and generate signatures that help predict and defeat future attacks across the entire network.”
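
To make that concrete, here is a minimal sketch in Python of the kind of pipeline the report describes: pool reports from distributed sensors, keep the attack features that several independent sensors agree on, and push the resulting signature back out to every defended node. Every name here (SensorReport, extract_signature, and so on) is a hypothetical illustration, not a real DOD or vendor interface.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class SensorReport:
    """Telemetry from one distributed sensor (hypothetical schema)."""
    source_ip: str
    payload_tokens: list[str]  # features extracted from observed attack traffic

def extract_signature(reports: list[SensorReport], min_support: int = 3) -> set[str]:
    """Keep only features seen by several independent sensors, so a single
    noisy or subverted sensor cannot poison the shared signature."""
    counts = Counter(tok for r in reports for tok in set(r.payload_tokens))
    return {tok for tok, n in counts.items() if n >= min_support}

def distribute(signature: set[str], nodes: list[str]) -> None:
    """Stand-in for pushing the signature out to every defended node."""
    for node in nodes:
        print(f"updating blocklist on {node}: {sorted(signature)}")

# Usage: three sensors observe overlapping attack traffic; the features they
# agree on become a network-wide signature for defeating future attacks.
reports = [
    SensorReport("10.0.0.1", ["/cgi-bin/exploit", "shell=1"]),
    SensorReport("10.0.0.2", ["/cgi-bin/exploit", "shell=1", "noise"]),
    SensorReport("10.0.0.3", ["/cgi-bin/exploit", "shell=1"]),
]
distribute(extract_signature(reports), nodes=["web-01", "web-02"])
```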

It will soon become essential to incorporate both autonomy and counter-autonomy in cybersecurity team training and systems design, and to make them independently testable by outside “sparring partners.” But today’s training, wargaming and testing regimens were not built to incorporate big data and “man-on-the-loop” approaches; this gap accounts for much of the misunderstanding and distrust of cyber autonomy.

DOD has many cyber ranges where these concepts might be examined, but it’s not clear how many are configured to handle the autonomy/counter-autonomy fights described above, or are instrumented to provide useful results. Most current war games do not include autonomy/counter-autonomy, and moving training from “man-in-the-loop” to “man-on-the-loop”—meaning a system that calls for human decisions only in rare and critical cases—is a further challenge.
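
The “loop” distinction is easy to illustrate. In the hypothetical Python sketch below, a man-in-the-loop design would route every alert through a human analyst, while the man-on-the-loop version acts autonomously on routine events and escalates only the rare, critical ones; all identifiers are invented for illustration.

```python
from enum import Enum

class Severity(Enum):
    ROUTINE = 1
    ELEVATED = 2
    CRITICAL = 3  # e.g., a response that would take a weapons system offline

def automated_response(alert: dict) -> str:
    """Machine-speed handling for the common cases."""
    return f"auto-contained {alert['id']}"

def ask_human(alert: dict) -> str:
    """Stand-in for escalation to a human operator."""
    return f"escalated {alert['id']} to the watch officer"

def man_on_the_loop(alert: dict) -> str:
    # The human supervises the system and is consulted only for the rare,
    # critical decision; everything else is handled at machine speed.
    if alert["severity"] is Severity.CRITICAL:
        return ask_human(alert)
    return automated_response(alert)

# A man-in-the-loop design would instead call ask_human() on every alert,
# capping the whole defense at human reaction time.
print(man_on_the_loop({"id": "alert-117", "severity": Severity.ROUTINE}))
print(man_on_the_loop({"id": "alert-118", "severity": Severity.CRITICAL}))
```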

The Defense Science Board offered a clear recommendation “to establish a counter-autonomous systems community to develop and test counter-autonomy technologies, surrogates and solutions,” which would draw from the way DOD handles advances and countermeasures in stealth technology today. However, DOD will be challenged to attract and retain people with the right skills for this community.

Protecting autonomous platforms (or autonomous subsystems) that are not part of large, wide-bandwidth networks requires a different approach: a focus on resilience (rebounding from an attack while maintaining as much mission performance as possible) rather than robustness (resistance to attack). An autonomous platform under attack could constantly check the integrity of its components, determine their importance, and isolate or restore capabilities as required.
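
A minimal sketch of that self-checking loop might look like the following. The component names, baseline hashes and priorities are invented, and a real platform would anchor its measurements in signed firmware and a hardware root of trust rather than this toy comparison.

```python
import hashlib

# Known-good digests for each component (hypothetical values; a real platform
# would anchor these measurements in a hardware root of trust).
BASELINE = {
    "navigation": hashlib.sha256(b"nav-firmware-v1").hexdigest(),
    "comms": hashlib.sha256(b"comms-firmware-v1").hexdigest(),
    "telemetry": hashlib.sha256(b"telemetry-firmware-v1").hexdigest(),
}
PRIORITY = {"navigation": 1, "comms": 2, "telemetry": 3}  # 1 = mission-critical

def integrity_sweep(current_images: dict[str, bytes]) -> None:
    """Check every component in priority order; restore what the mission
    cannot live without, isolate what it can."""
    for name, image in sorted(current_images.items(), key=lambda kv: PRIORITY[kv[0]]):
        if hashlib.sha256(image).hexdigest() == BASELINE[name]:
            continue  # component intact
        if PRIORITY[name] == 1:
            print(f"{name}: compromised, restoring from protected image")
        else:
            print(f"{name}: compromised, isolating and degrading gracefully")

# Usage: telemetry has been tampered with; the platform keeps performing its
# mission by isolating that component rather than halting entirely.
integrity_sweep({
    "navigation": b"nav-firmware-v1",
    "comms": b"comms-firmware-v1",
    "telemetry": b"tampered",
})
```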

Much work remains to be done, but some aspects of such a system already exist. DARPA has a program called CRASH, for Clean-Slate Design of Resilient, Adaptive, Secure Hosts, which aims to create a more secure, segmented and resilient computing architecture that helps systems defend themselves autonomously. Code mathematically proven to behave as specified, or “formally verified,” is another promising approach.
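
To give a flavor of what “formally verified” means, here is a toy proof in the Lean theorem prover, offered purely as an illustration with no connection to CRASH: a machine-checked guarantee that a bounds-clamping function can never return an out-of-range index, the kind of property that rules out an entire class of memory-safety flaws.

```lean
-- Toy illustration of formal verification (hypothetical example, not CRASH code).
-- `clamp i n` forces an index into the range [0, n); the theorem is a
-- machine-checked proof that the result is in bounds whenever n > 0.
def clamp (i n : Nat) : Nat := min i (n - 1)

theorem clamp_in_bounds (i n : Nat) (h : 0 < n) : clamp i n < n :=
  Nat.lt_of_le_of_lt (Nat.min_le_right i (n - 1)) (Nat.sub_lt h Nat.one_pos)
```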

The Defense Science Board report also called for DARPA to push the envelope via a “stretch program” to demonstrate autonomous cyber-resilient systems for autonomous military vehicles and to incentivize improved capabilities through a series of increasingly rigorous competitions.

The game is worth the candle. At DEF CON, DARPA Director Arati Prabhakar reflected on the combination of AI and machine learning with binary-level security operations and the “formal verification” of code, saying these offer ways to “imagine a future with some likelihood of cybersecurity.” The benefits of such a future would be enormous.