U.N. officials, scientists and activists have gathered this week to debate a not-so-distant tomorrow filled with autonomous robot warriors -- but machines that can kill without human guidance are already here.
In fact, they are already being tested by militaries across the world.
This week's talks in Geneva are the beginning of a debate over whether autonomous lethal systems will eventually be banned under the Convention on Certain Conventional Weapons (CCW).
Some of the robots are just prototypes. Others, like sentry robots installed by South Korea and Israel, have the ability to kill autonomously but are not operated that way. As the technology gets more advanced, some officials worry that stories about "Terminator"-style robots might slow the adoption of machines that could save soldiers' lives.
It's not just governments that are interested in the technology that's developed in the pursuit of warrior robots. In December, Google bought Boston Dynamics, a company famous for building an array of frighteningly realistic robots for the research arm of the Department of Defense.
'Fire and forget'
The Demilitarized Zone along South Korea's border is lined with mechanized sentinels. Each stationary robot looks something like a menacing security camera, but with the ability to lock its built-in machine gun onto a human target and shoot to kill.
The SGR-A1 robots, developed jointly by Samsung Techwin and Korea University, can automatically detect North Korean soldiers walking over the border and could technically fire without the help of a human.
That is not how they work in practice. Instead, once the SGR-A1 detects something, it alerts an operator who can then decide to pull the trigger. Why the middleman?
“They got a lot of bad press about having autonomous killer robots on their border,” Peter Asaro, the co-founder of the International Committee for Robot Arms Control, told NBC News from Geneva.
The SGR-A1 is not even that advanced by today's standards, Asaro said, comparing its sensors to the ones used by Microsoft's Kinect.
Other, more advanced robots are being tested right now. The U.S. Navy has successfully launched Northrop Grumman’s X-47B, a stealth drone the size of a fighter jet, from its aircraft carriers. In the U.K., Taranis, a top-secret unmanned aircraft named after the Celtic god of thunder, can travel at supersonic speeds and could be used by the British military to carry out pre-programmed attacks. (BAE Systems said the aircraft is meant to be used under the "control of a human operator").
The Harpy, described by Israel Aerospace Industries as a “fire and forget” weapon, is essentially a powerful missile with a brain, programmed to cruise until it detects emissions from a hostile radar system.
Current international law regulates all of these machines as it would any deadly weapon. It's only recently, however, that anybody had to worry about robots killing without a person making the order.
Deciding the future
When it comes to lethal autonomous systems, proponents argue that they could one day save lives by precisely targeting only opposing soldiers and machines while sparing civilians.
"Too often, the phrase 'autonomous lethal system' appears still to evoke images of a humanoid machine," U.S. diplomat Stephen Townley said in the U.S. delegation's opening statement.
That "Terminator" imagery is "a far cry from what we should be focusing on," he said, adding that the "United States intends to discuss the risks of autonomy, as well as possible benefits, and means of analyzing those risks, over the coming days."
No formal rules will be announced when the talks end on Friday. But while politicians and academics debate whether autonomous killer machines should be used on the battlefield, defense companies are charging ahead at full speed, Asaro said.
"This discussion is certainly not slowing down development," he said. "It’s definitely changing the PR around the development, but if you go to any arms conventions, drones and autonomous weapons are the hot topic."