Just Say No To 'Killer Robots'

Laura Nolan
Ex-Google Engineer, Computer Scientist

People have killed each other for thousands of years. When the fist was not enough, humans realized that sharp objects were better than bare hands, and so we created knives, swords and other weapons capable of piercing human flesh.

Fast forward to the advent of gunpowder, and we realized we could kill from a distance, without worrying that the target would have a chance to fight back. Firearms were born, and in the modern world there are far too many of them.

From militaries to civilians, weapons have long been the tools used to kill, hurt or threaten.

But for at least as long, humans have been the ones behind the hilt or the trigger. It has been humans of flesh and bone who decide when and how weapons are used, not so-called 'intelligent' computers.

Laura Nolan, an engineer and former Google employee, once worked on one of the company's most controversial projects: Project Maven, which aimed to use AI to power military drones.

"It was such a betrayal. We’re pretending to be a happy company that does lovely information organizing, and then you’re building several steps toward killer drones flying around."

Read: AI And Robotic Specialists Warned The United Nations About The Danger Of Autonomous Weapons

“As a site reliability engineer my expertise at Google was to ensure that our systems and infrastructures were kept running, and this is what I was supposed to help Maven with. Although I was not directly involved in speeding up the video footage recognition I realized that I was still part of the kill chain; that this would ultimately lead to more people being targeted and killed by the US military in places like Afghanistan.”

After leaving the company over the project, Nolan joined the Campaign to Stop Killer Robots and became an activist warning of the looming danger of AI-powered weapons.

Nolan has predicted that the autonomous weapons now being developed pose a far greater risk to the human race than remote-controlled drones.

Such robots may be capable of deciding for themselves and of operating in environments previously unknown to them. With AI, they are trained, and retrained, to recognize sequences and patterns and to react to situations accordingly.

Her concern is that AI-driven robots, which are not under direct human control, are not constrained by any human attributes.

Replace living tissue with bits and bytes and machines lose the human qualities of leadership, courage, judgment, and discipline.

Humans make mistakes; they are, after all, human. But computers make mistakes too, in the form of bugs. All software has bugs, and it can have security holes. Put that software into 'killer robots' and you have a recipe for disaster.

Related: The Lethal Autonomous Weapons Pledge

AI-powered robots, she warns, have the potential to do "calamitous things that they were not originally programmed for".

“You could have a scenario where autonomous weapons that have been sent out to do a job confront unexpected radar signals in an area they are searching; there could be weather that was not factored into its software or they come across a group of armed men who appear to be insurgent enemies but in fact are out with guns hunting for food. The machine doesn’t have the discernment or common sense that the human touch has."

"There could be large-scale accidents because these things will start to behave in unexpected ways. Which is why any advanced weapons systems should be subject to meaningful human control, otherwise they have to be banned because they are far too unpredictable and dangerous."

"How does the killing machine out there on its own flying about distinguish between the 18-year-old combatant and the 18-year-old who is hunting for rabbits?"

"I am not saying that missile-guided systems or anti-missile defense systems should be banned. They are after all under full human control and someone is ultimately accountable. These autonomous weapons however are unethical as well as a technological step change in warfare."

Warfare is an arena characterized by deception, trickery, tactics and constant change. Because AI systems offer no human-understandable reasoning for their decisions, putting machines behind the trigger in battle is far from the best solution.

In June, Google announced that it would not renew its contract for Maven and released a set of AI principles laying out guidelines for the future of the technology, including a vow not to use AI to create weapons.

Most of the employee activists, including Nolan, viewed the announcement as a win, but speaking at a Times conference, Google CEO Sundar Pichai played down the influence of the internal pressure.

"We don’t run the company by referendum," he said, explaining that he had listened to people actually working on building AI in making the decision. He stressed, however, that the company continued to do work with the military in areas like cybersecurity.