Jody Williams is on a mission to stop killer robots. The Nobel Peace Prize winner wants an international treaty forbidding machines that can target and kill human beings without requiring a person to pull the trigger.
This week in Geneva, she is part of a group meeting with United Nations delegates who are trying to answer the question, "Do nations regulate killer robots when they arrive or ban them before they can do any damage?"
Sound like science fiction? Drones like those used by the United States can already fire Hellfire missiles at targets below. But they are still controlled by a soldier in front of a screen. It's very possible that humans could be removed from that equation, a prospect that worries Williams, who jointly won the Nobel Peace Prize in 1997 for leading the International Campaign to Ban Landmines.
"I grew up during the Cold War," Williams told NBC News from Geneva. "The only thing I wanted my family to have was a bomb shelter, just in case the Soviet Union attacked us."
"I find the very idea of killer robots more terrifying than nukes. Where is humanity going if some people think it's OK to cede the power of life and death of humans over to a machine?"
Williams hopes that she and the Campaign to Stop Killer Robots, which she helped found in 2013, can persuade members of the U.N. to sign an international treaty banning the use of lethal autonomous weapons.
Right now, the U.N. is simply laying the groundwork for more official proceedings, which could eventually lead to a ban. It's not clear, however, if technology will wait for lawmakers to catch up.
Beyond science fiction
There are more than 80 countries with military robotics programs, according to P. W. Singer, senior fellow at the New America Foundation and author of "Wired for War: The Robotics Revolution and Conflict in the 21st Century." The White House recently announced that the U.S. would sell drones to allied nations.
So while "Terminator"-style robots aren't wandering around battlefields yet, that doesn't mean countries aren't trying to develop them.
Samsung Techwin's SGR-A1 robots in South Korea have the ability to autonomously fire on people walking across the DMZ. The Navy's X-47B stealth drones have not been programmed to target humans, but they're smart enough to autonomously take off and land on the deck of an aircraft carrier. Israel Aerospace Industries' Harpy drones can hunt down and crash into radar systems without human intervention.
There are even cars that have taken cross-country road trips without someone behind the wheel. The momentum, in the private and military sectors, seems to be toward autonomy.
Experts interviewed for this story were hesitant to put an exact timeline on when we might see killer robots on the battlefield. Many believe, however, that it could be very soon. On Monday in Geneva, computer scientist Stuart Russell made his own prediction.
Others agree that this is an issue that we could be dealing with in the near future.
"It's hard for me to imagine an Air Force officer being driven to Creech Air Force Base in Nevada in his Google self-driving car," Singer said, "and then getting out to use a remotely operated drone."
From landmines to killer robots
When Williams helped launch the International Campaign to Ban Landmines in 1992, it wasn't hard to find people who had been affected by them. A Human Rights Watch report from the following year estimated that at least one in 236 people in Cambodia had lost a limb to a landmine.
Hundreds of thousands of people around the world had been killed by them, including U.S. soldiers, whom Williams represented while working for the Vietnam Veterans of America Foundation. (The U.S. is not one of the 162 countries that have joined the Ottawa Convention banning anti-personnel landmines.)
Landmine explosions provide plenty of visceral stories to sway hearts and minds. Lethal autonomous weapons, on the other hand, have not killed anybody. Yet in her opinion, they pose a much greater threat.
"A landmine sits there and if someone steps on it, it blows up," she said. "The landmine isn't out targeting and killing people."
So how do you convince people that something that doesn't exist yet is worth banning? She points to the civilian casualties caused by drone strikes, including the 2013 incident in Yemen where witnesses say a drone killed 12 people who were attending a wedding party.
The Campaign to Stop Killer Robots — which includes members from 54 non-governmental organizations such as Pax Christi International and Amnesty International — argues that no technological safeguard will be enough to guarantee a robot won't kill civilians, whether because of technical errors or indiscriminate algorithms. And that's assuming the robots aren't used by terror groups or by nation-states against their own citizens.
Then there is the question of who is legally liable when a drone kills somebody. Is it the programmer? The military commander in charge of its upkeep? The country that purchased it?
Not everybody thinks that banning lethal autonomous weapons is the answer.
"There are very serious dangers to the proliferation of this technology," Matthew Waxman, a professor at Columbia Law School, told NBC News. "I'm just not persuaded that a blanket prohibition is the right approach."
He believes that robots could potentially make warfare safer for civilians. Facial recognition software, advanced targeting systems, non-lethal projectiles and other technological advances could lead to fewer deaths, he said.
Many existing systems, like missile defense batteries, already have a large degree of autonomy when it comes to targeting and firing. Defining what counts as a "lethal autonomous weapon" would be difficult, he said, and a blanket ban could end up stifling technologies that could save lives.
Williams, however, thinks the issue is much more black and white.
"We have nothing against robotics," she said. "We are against machines that — on their own — can target and kill human beings. That's a pretty clear line."
The road ahead
As in the early days of the International Campaign to Ban Landmines, the Campaign to Stop Killer Robots is a hodgepodge of non-profit organizations with no common budget.
Williams said the group needs to do "some serious fundraising" to put more pressure on countries to ban lethal autonomous weapons.
In Geneva, Elizabeth Quintana of the U.K. military think tank RUSI has already called a ban premature, echoing the opinion of the British government. Many countries are taking a wait-and-see approach.
Preemptive bans are relatively rare, but not unprecedented. Laser weapons designed to permanently blind people were added to the Convention on Certain Conventional Weapons (CCW) in 1995, even though they had never been used in combat.
Williams hopes that this week's meeting will lead to formal talks next year and eventually the addition of lethal autonomous weapons to the CCW.
"If we come out of this and don't see forward momentum, then we are going to have to rethink our strategy," she said. The landmine ban was created after the Canadian government, frustrated by the lack of progress by the U.N., invited countries to Ottawa to hammer out the text of an international treaty. Something similar, Williams said, could happen with killer robots.
For now, she bristles at the idea that lethal autonomous weapons will sooner or later show up in combat, even with the slow pace of politics and the much faster speed of technology.
"People keep saying that it's inevitable," she said. "Nothing is inevitable. It's only inevitable if you sit on your butt and don't take action to stop things you think are morally and ethically wrong."