Why we really should ban autonomous weapons: a response

By Stuart Russell, Max Tegmark & Toby Walsh
August 10, 2015

President Richard Nixon (seen here during his historic meeting with Chinese leader Mao Zedong) argued that a ban on biological weapons would strengthen U.S. national security (credit: White House Photo Office)

We welcome Sam Wallace’s contribution to the discussion on a proposed ban on offensive autonomous weapons. This is a complex issue and there are interesting arguments on both sides that need to be weighed up carefully.

His article, written as a response to an open letter signed by over 2500 AI and robotics researchers, begins with the claim that such a ban is as “unrealistic as the broad relinquishment of nuclear weapons would have been at the height of the cold war.”

This argument misses the mark. First, the letter proposes not unilateral relinquishment but an arms control treaty. Second, nuclear weapons were successfully curtailed by a series of arms control treaties during the cold war, without which we might not be here to have this conversation.

After that, his article makes three main points:

1) Banning a weapons system is unlikely to succeed, so let’s not try.

(“It would be impossible to completely stop nations from secretly working on these technologies out of fear that other nations and non-state entities are doing the same.” “It’s not rational to assume that terrorists or a mentally ill lone wolf attacker would respect such an agreement.”)

2) An international arms control treaty would necessarily hurt U.S. national security.

3) Game theory argues against an arms control treaty.

Are all arms control treaties bad?

Note that his first two arguments apply to any weapons system, and could be used to re-title his article “The proposed ban on <insert type here> is unrealistic and dangerous.”

Argument (1) is particularly relevant to chemical and biological weapons, which are arguably (contrary to Wallace’s claims) even lower-tech and easier to produce than autonomous weapons. Yet the world community has rather successfully banned biological weapons, space-based nuclear weapons, and blinding laser weapons. Even for arms such as chemical weapons, land mines, and cluster munitions, where bans have been breached or not universally ratified, severe stigmatization has limited their use. We wonder whether Wallace supports those bans and, if so, why.

Wallace’s main argument for why autonomous weapons are different from chemical weapons rests on AI systems that “infiltrate and take over the command and control of their enemy.” But this misses the point of the open letter, which is not opposing cyberdefence systems or other defensive weapons. (The treaty under discussion at the UN deals with lethal weapons; a defensive autonomous weapon that targets robots is not lethal.)

Indeed, if one is worried about cyberwarfare, relying on autonomous weapons only makes things worse, since they are easier to hack than human soldiers.

One thing we do agree with Wallace on is that negotiating and implementing a ban will be hard. But as John F. Kennedy emphasized when announcing the Moon missions, hard things are worth attempting when success will greatly benefit the future of humanity.

National security

Regarding argument (2), we agree that all countries need to protect their national security, but we assert that this argues for rather than against an arms control treaty. When President Richard Nixon called for a ban on biological weapons in 1969, he argued that it would strengthen U.S. national security: U.S. biological warfare research created a model that other, less powerful nations might easily emulate, to the eventual detriment of U.S. security.

Most of Wallace’s arguments for why a ban would hurt U.S. national security attack imaginary proposals that the open letter doesn’t make. For example, he gives many examples of why it’s important to have defensive systems (against hacking, incoming mortars, rockets, drones, robots that physically take control of our aircraft, etc.), and warns of trying to “fight future flying robot tanks by using an equine cavalry defense,” but the letter proposes a ban only on offensive weapons, not defensive ones.

He argues that we can’t uninvent deep learning and other AI algorithms, but the thousands of AI and robotics signatories aren’t proposing to undo or restrict civilian AI research, merely to limit its military use. Moreover, we can’t uninvent molecular biology or nuclear physics, but we can still try to prevent their use for mass killing.

Wallace also gives some technically flawed arguments for why a ban would hurt U.S. national security. For example, his argument in the “deception” section evaporates once the video stream is securely encrypted and authenticated.
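
To make this concrete, here is a minimal sketch of the relevant property, assuming Python with the `cryptography` package; the frame format, sequence numbering, and function names are illustrative inventions of ours, not anything specified in the open letter or in Wallace’s article. With authenticated encryption, an adversary without the key cannot inject or alter video frames undetected, which is exactly what the deception scenario would require.

```python
# Illustrative sketch: an authenticated-encryption video link.
# Any injected, altered, replayed, or reordered frame fails verification.
import os

from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()  # shared in advance over a secure channel
aead = ChaCha20Poly1305(key)

def send_frame(seq: int, frame: bytes) -> tuple[bytes, bytes]:
    """Encrypt and authenticate one frame; binding the sequence number
    as associated data also blocks replay and reordering."""
    nonce = os.urandom(12)  # 96-bit nonce, fresh per frame
    return nonce, aead.encrypt(nonce, frame, seq.to_bytes(8, "big"))

def receive_frame(seq: int, nonce: bytes, ciphertext: bytes) -> bytes | None:
    """Return the frame only if its authentication tag verifies."""
    try:
        return aead.decrypt(nonce, ciphertext, seq.to_bytes(8, "big"))
    except InvalidTag:
        return None  # spoofed or corrupted frame: drop it

nonce, ct = send_frame(1, b"...frame bytes...")
assert receive_frame(1, nonce, ct) == b"...frame bytes..."
tampered = ct[:-1] + bytes([ct[-1] ^ 1])  # flip one bit of the tag
assert receive_frame(1, nonce, tampered) is None  # forgery rejected
```

The point here is standard cryptographic engineering rather than exotic AI: deception of the kind Wallace describes would require forging the authentication tag, which is computationally infeasible without the key.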

His concern that a military superpower such as the U.S. could be defeated by home-made, weaponized civilian drones is absurd; consideration of such infeasible scenarios is best confined to computer games. Yes, nations need to protect against major blows to their defensive assets, but home-made pizza drones can’t deliver that. Some advanced future military technology might, and preventing such developments is the purpose of the treaty we advocate.

Finally, Wallace argues that we shouldn’t work towards arms control agreements because people might “merge with machines” into cyborgs, and because “some time in the next few decades you might also have to get a consciously aware AI weapon to agree to the terms of the treaty.” Let’s not let highly speculative future scenarios distract us from the challenge of stopping an arms race today!

Game theory

Wallace makes a game-theoretic argument that arms control treaties can only work if some other, more powerful weapon is left unregulated to serve as a deterrent.

First, this argument is irrelevant here, since there is currently no evidence that offensive autonomous weapons would undermine today’s nuclear deterrence.

Second, even if the argument were relevant, game theory beautifully explains why verifiable and enforceable arms control treaties can enhance the national security of all parties: they change the incentive structure, away from a destructive prisoner’s dilemma and toward a new equilibrium in which cooperation is in everybody’s best interest.
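
To illustrate (with toy payoff numbers of our own choosing, meaningful only in their ordering, and a hypothetical verification penalty): without verification, the arms race is a prisoner’s dilemma whose only equilibrium is mutual build-up; a credible penalty for detected violations makes mutual restraint the unique equilibrium. The short Python sketch below checks this.

```python
# Toy symmetric two-player arms-race game. Payoffs are illustrative only.
# payoff[(a, b)] is a player's payoff for playing `a` against `b`.

def pure_equilibria(payoff):
    """Return the pure-strategy Nash equilibria of a symmetric 2x2 game."""
    strategies = ["restrain", "build"]
    eqs = []
    for a in strategies:
        for b in strategies:
            best1 = all(payoff[(a, b)] >= payoff[(x, b)] for x in strategies)
            best2 = all(payoff[(b, a)] >= payoff[(y, a)] for y in strategies)
            if best1 and best2:
                eqs.append((a, b))
    return eqs

# Without verification: a classic prisoner's dilemma.
arms_race = {
    ("restrain", "restrain"): 3,  # mutual restraint: the safest world
    ("restrain", "build"):    0,  # unilateral restraint: worst outcome
    ("build",    "restrain"): 4,  # unilateral advantage: the temptation
    ("build",    "build"):    1,  # arms race: bad for everyone
}
print(pure_equilibria(arms_race))  # [('build', 'build')], i.e. the race

# With verification and enforcement: cheating costs more than it gains.
PENALTY = 3  # hypothetical cost of being caught violating the treaty
treaty = dict(arms_race)
treaty[("build", "restrain")] -= PENALTY
treaty[("build", "build")]    -= PENALTY
print(pure_equilibria(treaty))     # [('restrain', 'restrain')]
```

The design point is that a well-constructed treaty does not rely on trust: verification and enforcement change the payoffs so that each party’s self-interested best response is to comply.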

What’s his plan?

What we view as the central weakness of Wallace’s article is that it never addresses the main argument of the open letter: that the endpoint of an AI arms race will be disastrous for humanity. The open letter proposes a solution (attempting to stop the arms race with an arms control agreement), but he offers no alternative solution.

Instead, his plan appears to be that all world military powers should develop offensive autonomous weapons as fast as possible. Yet he never follows through and describes what endpoint he expects this race to lead to. Indeed, he warns in his article that one way to prevent terrorism with cheap autonomous weapons is an extreme totalitarian state, yet he never explains how his plan would avoid such totalitarianism.

If every terrorist and every disgruntled individual can buy lethal autonomous drones for their pet assassination projects with the same ease that they can buy Kalashnikovs today, how is his proposed AI-militarization plan supposed to stop this? Is he proposing a separate military drone hovering over every city block 24 hours per day, ready to strike suspect citizens without human intervention?

Wallace never attempts to explain why a ban is supported by thousands of AI and robotics experts, by the ambassadors of Germany and Japan, by the International Committee of the Red Cross, by the editorial pages of the Financial Times, and indeed (for the time being) by the stated policy of the U.S. Department of Defense, other than with a dismissive remark about “kumbaya mentality.”

Anybody criticizing an arms-control proposal endorsed by such a diverse and serious-minded group needs to clearly explain what they are proposing instead.

Stuart Russell is a professor of computer science at UC Berkeley, and co-author of the standard textbook, Artificial Intelligence: A Modern Approach. Max Tegmark is a professor of physics at MIT and co-founder of the Future of Life Institute. Toby Walsh is a professor of AI at the University of New South Wales and NICTA, Australia, and president of the AI Access Foundation.