The Lethal Autonomous Weapons Pledge

18/07/2018

More than 2,400 people working on artificial intelligence, from 170 organizations in 30 countries, have signed a pledge saying they will not help develop lethal autonomous weapons: machines capable of deciding to attack humans on their own.

They were joined by business leaders such as OpenAI co-founder Elon Musk; DeepMind co-founders Mustafa Suleyman, Shane Legg, and Demis Hassabis; Google AI lead Jeff Dean; Skype co-founder Jaan Tallinn; and MIT professor Max Tegmark.

Called 'The Lethal Autonomous Weapons Pledge', the document is meant to discourage military firms and countries from producing AI-enhanced killing machines. The signatories declare that no machine should ever decide on its own whether to take a human life.

The pledge is organized by the Future of Life Institute (FLI), a Boston-based research organization that aims to mitigate existential risks facing humanity.

The pledge was published at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm.

[Image: Phalanx CIWS radar-guided Vulcan cannon]
Lethal autonomous weapons already exist, ranging from tanks, sentry guns, and drones to the Phalanx CIWS radar-guided Vulcan cannon pictured above.

The full text of the pledge reads:
Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.

In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable.

There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual. Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems.

Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.

We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.

The movement began in April 2018, when a group of researchers called for a boycott of the Korea Advanced Institute of Science and Technology (KAIST), which was working with a defense company to apply AI to military weapons. The open letter opposing the collaboration warned that such weapons would open a "Pandora's box."

The public pledge goes further, committing its signatories not to build any "lethal autonomous weapons" amid increasing concern about how machine learning and AI will be used on the battlefields of the future.

The pledge also calls on governments, urging them to do more to regulate and restrict autonomous killing machines, amid fears that countries will enter an arms race that could spiral out of control and threaten global stability.

"AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way," said Tegmark.

Google had previously come under fire over Project Maven, a Pentagon program that used the company's AI to analyze drone footage; after internal protests, Google decided not to renew the contract.