DARPA announced this week a new research program, Assured Autonomy, that seeks to develop ways for artificial intelligence to learn and evolve in rapidly changing environments and to improve the predictability of autonomous systems such as driverless vehicles and drones.
“Tremendous advances have been made in the last decade in constructing autonomy systems, as evidenced by the proliferation of a variety of unmanned vehicles. These advances have been driven by innovations in several areas, including sensing and actuation, computing, control theory, design methods, and modeling and simulation,” said Sandeep Neema, program manager at DARPA.
“In spite of these advances, deployment and broader adoption of such systems in safety-critical DoD applications remains challenging and controversial.”
Last year the Defense Department released a report [PDF] on the current state and future of autonomy that focused on the need for autonomous systems to have a strong degree of trust.
The report argued that in order for autonomous systems to be trusted, they must operate safely and predictably, especially in a military context. Operators should also be able to tell whether a system is operating reliably and, if not, the system should be designed so that appropriate action can be taken. This is the goal of Assured Autonomy.
“Historically, assurance has been approached through design processes following rigorous safety standards in development, and demonstrated compliance through system testing,” said Neema.
“However, these standards have been developed primarily for human-in-the-loop systems, and don’t extend to learning-enabled systems with advanced levels of autonomy. The assurance approaches today are predicated on the assumption that the systems, once deployed, do not learn and evolve.”
One approach to assurance of autonomous systems that has recently garnered attention, particularly in the context of self-driving vehicles, is based on the idea of "equivalent levels of safety": the autonomous system must be at least as safe as the comparable human-in-the-loop system it replaces. The approach compares known safety-incident rates of manned systems (for example, the number of accidents per thousand miles driven) with the corresponding incident rate for autonomous systems, determined by conducting physical trials. Studies and analyses indicate, however, that assuring the safety of autonomous systems in this manner alone is prohibitive, requiring millions of physical trials, perhaps spanning decades. Simulation techniques have been advanced to reduce the needed number of physical trials, but they offer very little confidence, particularly with respect to low-probability, high-consequence events.
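A rough back-of-the-envelope calculation (not from the DARPA program itself) shows why trial-based assurance is prohibitive. If an autonomous vehicle drives n miles without a fatal incident, the standard zero-failure confidence bound tells us how many miles are needed to claim its incident rate is no worse than a human baseline. The baseline of roughly one fatal incident per 100 million miles is an illustrative assumption, not a figure from the article:

```python
import math

def miles_for_confidence(baseline_rate_per_mile: float,
                         confidence: float = 0.95) -> float:
    """Failure-free miles needed to show, with the given confidence,
    that the true per-mile incident rate is no worse than the baseline.
    Uses the exact zero-failure bound: (1 - p)**n <= 1 - confidence,
    so n >= ln(1 - confidence) / ln(1 - p)."""
    return math.log(1 - confidence) / math.log(1 - baseline_rate_per_mile)

# Illustrative baseline: roughly one fatal incident per 100 million miles
baseline = 1e-8
miles = miles_for_confidence(baseline)
print(f"{miles:,.0f} failure-free miles needed")  # on the order of 300 million
```

At typical fleet speeds, accumulating hundreds of millions of incident-free miles takes years to decades, which is the "millions of physical trials, perhaps spanning decades" problem the paragraph describes.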
In contrast to prescriptive, process-oriented standards for safety and assurance, a goal-oriented approach, such as the one espoused by Neema, is arguably more suitable for systems that learn, evolve, and encounter operational variations. In the course of the Assured Autonomy program, researchers will aim to develop tools that provide foundational evidence that a system can satisfy explicitly stated functional and safety goals, resulting in a measure of assurance that can also evolve with the system.
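One common building block of goal-oriented assurance is a runtime monitor that checks the system's observed state against its explicitly stated safety goals and accumulates evidence of compliance. The sketch below is an illustration of that general idea, not DARPA's design; all names and the separation-distance goal are invented for the example:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SafetyGoal:
    """An explicitly stated safety goal, e.g. 'keep at least 5 m separation'."""
    name: str
    predicate: Callable[[dict], bool]  # maps an observed state to pass/fail

@dataclass
class RuntimeMonitor:
    """Checks each observed state against all goals and records the
    outcome, building an evidence trail that evolves with the system."""
    goals: list
    evidence: list = field(default_factory=list)

    def observe(self, state: dict) -> bool:
        ok = True
        for goal in self.goals:
            satisfied = goal.predicate(state)
            self.evidence.append((goal.name, state, satisfied))
            ok = ok and satisfied
        return ok

# Hypothetical usage: a minimum-separation goal for a drone
separation = SafetyGoal("min_separation", lambda s: s["distance_m"] >= 5.0)
monitor = RuntimeMonitor(goals=[separation])
print(monitor.observe({"distance_m": 12.0}))  # True: goal satisfied
print(monitor.observe({"distance_m": 3.0}))   # False: violation recorded
```

The appeal of this structure for learning-enabled systems is that the goals stay fixed even as the controller's behavior changes, so the evidence log remains a meaningful measure of assurance over time.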