
Killer robots ... friend or foe?

Thousands of robots are already on the battlefield in Iraq and Afghanistan, but what happens when you hand the robot a gun and turn it loose?

Some researchers fear that giving military robots autonomy as well as ammo is the first step toward a "Terminator"-style nightmare, while others suggest that in some scenarios, weapon-wielding robots could someday act more humanely than humans.

The pros and cons of killer robots are taking center stage Wednesday in London at the Royal United Services Institute, considered the world's oldest military think tank.

On one side of the issue is Ronald Arkin, a robotics researcher at Georgia Tech who is working on a Pentagon-funded project to build a sense of ethics into battlefield robots - "an artificial conscience, if you will," he told me.

Design engineer Gary Morin demonstrates Foster-Miller's weaponized SWORDS robot. (AP photo)

"The basic rule is to try to engineer a system that will comply as best it can, given the information that it has, with the laws of war," Arkin explained. "And it's my belief that eventually we can do better than humans in this regard."

On the other side is Noel Sharkey, a robotics expert at Britain's University of Sheffield who served as chief judge for the long-running TV show "Robot Wars." 

Nowadays, Sharkey is sounding the alarm about the prospect of real-life robot wars: He's calling for an international ban on autonomous weapon systems until it can be shown that they can obey the laws of war.

"I think we should be addressing this immediately," Sharkey told me. "I think we've already stepped over the line."

Killer robots aren't on their own ... yet

That doesn't mean killer robots are on the loose. To date, the battlefield 'bots have been used as not-so-autonomous extensions of human warfighting capabilities. For example, the missile-armed Predator drones that have played such a prominent role in Iraq and Afghanistan are remote-controlled by teams of living, breathing pilots.

On the ground, robots have traditionally done reconnaissance or hunted for roadside bombs. Just recently, the Pentagon went through a tangled procurement process to order up to 3,000 next-generation machines. (After a legal battle, the contract was won by iRobot, which also makes the Roomba vacuum cleaner and other robotic helpers.)

Last year, the Pentagon started sending gun-toting robots to Iraq, but even those robots aren't designed for autonomous operation. Instead, they're remote-controlled by human operators and are equipped with fail-safe systems that shut them down if they go haywire.

What worries Sharkey is that the military may be on a slippery slope leading to a robotic arms race. "My real concern is that the policies are going to make themselves, that the 'autonomization' of weapons will creep in piecemeal," he told me.

For example, Sharkey pointed out that the Pentagon is already on a path to make a third of its ground combat vehicles autonomous by 2015. "Then you'll put a weapon in one of them, and then it will gradually creep in bit by bit," he said.

He also pointed to the Pentagon's roadmap for billions of dollars' worth of robotic research over the next 25 years. As the United States and its allies put more and more robots on the battlefield, their rivals will surely follow. "Once you build them, they're easy to copy," Sharkey said. "The trouble is that we can't really put the genie back in the bottle."

Even if the United States takes care to build robots with a "conscience," others may feel under no pressure to do likewise. A couple of years ago, Iranian-backed Hezbollah guerrillas sent a remote-controlled drone over Israel, and Sharkey said al-Qaida and other terrorists could follow suit with their own breeds of robo-bombers.

"If you don't really give a toss, you can just put an autonomous weapon running into a crowd anywhere," Sharkey said. "It's only a matter of time before that happens."

Killer robots with a conscience?

Arkin agrees with Sharkey that it's high time to start thinking about the implications of autonomous weapon systems.

"I think that's a reasonable debate, and there's good reason to have that debate at this time, just so we understand what we're creating," he said. "I would be content if it was decided that autonomous systems have to be banned from the battlefield completely."

But when it comes to designing the combat systems of the future, Arkin argued that there should be a place for autonomy, or at least an embedded sense of ethics. He pointed out that humans haven't always had a good track record on battlefield behavior.

"Human performance, unfortunately, is a relatively low bar," Arkin said.

One of Arkin's suggestions would apply even if a robot is under human control: The robot should be able to sense when something isn't right about what it's being asked to do - and then require the human operator to override the robot's artificial conscience before proceeding.

In other scenarios, the data flooding in about a potentially threatening encounter might be so overwhelming that mere mortals would not be able to process the input in time to make the right decision. "Ultimately, robots will have more sensors and better sensors than humans have to see the situation," Arkin said.

Arkin said he doesn't advocate the idea of creating robot armies to sweep over a battlefield. Rather, they would be used for targeted applications: For example, once an urban area is cleared of civilians, a robot could be set up to watch out for snipers and fire back autonomously, he said.

"The impact of the research I'm doing is, hopefully, going to save lives," he said.

But Arkin described his efforts as mere "baby steps" toward the creation of battlebots with a conscience. "There are no milestones or timetables for doing this right now," he said. "We're pioneering this work to see where it would lead."

New laws of robotics

This work goes way beyond science-fiction author Isaac Asimov's Three Laws of Robotics, which supposedly ruled out scenarios where robots could harm humans.

"Asimov contributed greatly in the sense that he put up a straw man to get the debate going on robotics," Arkin said. "But it's not a basis for morality. He created [the Three Laws] deliberately with gaps so you could have some interesting stories."

Even without the Three Laws, there's plenty in today's debate over battlefield robotics to keep novelists and philosophers busy: Is it immoral to wage robotic war on humans? How many civilian casualties are acceptable when a robot is doing the fighting? If a killer robot goes haywire, who (or what) goes before the war-crimes tribunal?

Sharkey said such questions should go before an international body that has the power to develop a treaty on autonomous weapons.

"In 1950, The New York Times was calling for a U.N. commission on robotic weapons," Sharkey said. "Here we are, 57 years later, and it's actually coming to pass - and we still haven't got it."

Update for 9:30 p.m. ET Feb. 26: I probably haven't done full justice to either Arkin's or Sharkey's point of view. For more about Arkin's work on robotic ethics, including a meaty technical report, check out his home page at Georgia Tech. For more about Sharkey's views, click on over to this article from Computer Magazine as well as his home page at the University of Sheffield.

Update for 6:30 p.m. ET Feb. 27: A sharp-eyed reader said that the picture of the robot I originally used on this item was not actually equipped with a gun. I've replaced that picture with a different one showing the right robot. Thanks for setting me straight, Remoteman!