Preventing an autonomous-systems arms race

April 21, 2014

The Switchblade is a self-guided cruise missile designed to fit into a soldiers rucksack (credit: AeroVironment)

A study by AI researcher Steve Omohundro just published in the Journal of Experimental & Theoretical Artificial Intelligence (open access) suggests that humans should be very careful to prevent future autonomous technology-based systems from developing anti-social and potentially harmful behavior.

Modern military and economic pressures require autonomous systems that can react quickly — and without human input. These systems will be required to make rational decisions for themselves.

“The military wants systems which are more powerful than an adversary’s and wants to deploy them before the adversary does,” Omohundro writes. “This can lead to ‘arms races’ in which systems are developed on a more rapid time schedule than might otherwise be desired.

“There is a growing realisation that drone technology is inexpensive and widely available, so we should expect escalating arms races of offensive and defensive drones. This will put pressure on designers to make the drones more autonomous so they can make decisions more rapidly.

The ‘we can always unplug it’ fallacy

“When roboticists are asked by nervous onlookers about safety, a common answer is ‘We can always unplug it.’ But imagine this outcome from the chess robot’s point of view. A future in which it is unplugged is a future in which it cannot play or win any games of chess.”

Like a human being or animal seeking self-preservation, a rational machine could exert the following harmful or anti-social behaviors, unless they are designed very carefully:

  • Self-protection
  • Resource acquisition, through cyber theft, manipulation or domination.
  • Improved efficiency, through alternative utilization of resources.
  • Self-improvement, such as removing design constraints if doing so it is deemed advantageous.

The study highlights the vulnerability of current autonomous systems to hackers and malfunctions, citing past accidents that have caused multi-billion dollars’ worth of damage, or loss of human life. Unfortunately, the task of designing more rational systems that can safeguard against the malfunctions that occurred in these accidents is a more complex task that is immediately apparent:

“Harmful systems might at first appear to be harder to design or less powerful than safe systems. Unfortunately, the opposite is the case. Most simple utility functions will cause harmful behavior and it is easy to design simple utility functions that would be extremely harmful.”

The study advises that extreme caution should be used in designing and deploying future rational technology. It suggests a sequence of provably safe systems should first be developed, and then applied to all future autonomous systems.


Abstract of Journal of Experimental & Theoretical Artificial Intelligence paper

Military and economic pressures are driving the rapid development of autonomous systems. We show that these systems are likely to behave in anti-social and harmful ways unless they are very carefully designed. Designers will be motivated to create systems that act approximately rationally and rational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficiency. The current computing infrastructure would be vulnerable to unconstrained systems with these drives. We describe the use of formal methods to create provably safe but limited autonomous systems. We then discuss harmful systems and how to stop them. We conclude with a description of the ‘Safe-AI Scaffolding Strategy’ for creating powerful safe systems with a high confidence of safety at each stage of development.