Nearly every fighting ship in the U.S. Navy carries a Phalanx defense system, a computerized Gatling gun set on a six-ton mount that uses radar to spot targets flying out of the sky, or cruising across the ocean's surface. Once it "evaluates, tracks, engages and performs a kill assessment," a human gives the order to rattle off 4,500 rounds per minute.
This sort of "supervised" automation is not out of the ordinary. When Israel's "Iron Dome" radar spots an incoming missile, it can automatically fire an interceptor to knock it down. The German Air Force's Skyshield system can now also shoot down its targets with very little human interaction.
For years, "sniper detectors" have pointed telltale lasers at shooters firing on troops; DARPA is even working on a version that operates "night and day" from a moving military vehicle that's under fire. Meanwhile, sniper rifles themselves are getting smarter: In the case of the TrackingPoint precision-guided firearm, the operator pulls the trigger, but the gun's built-in computer decides when the bullet flies.
"We are not in the 'Terminator' world and we may never reach there," says Peter Singer, author of "Wired for War" and director of the Center for 21st Century Security and Intelligence at the Brookings Institution. "But to say there isn't an ever-increasing amount of autonomy to our systems — that's fiction."
Preparing for a future in which robots may be given a tad more independence, an international coalition of human rights organizations, including Human Rights Watch, is banding together to propose a treaty ban on "killer robots."
The Campaign to Stop Killer Robots publicly launched April 23 with the goal of bringing the discussion about autonomous weapons systems to regular people, not just politicians and scientists. Also this month, the United Nations Special Rapporteur recommended a suspension of autonomous weapons — or "lethal autonomous robotics" — until their control and use are discussed in detail. But critics of those recommendations argue that it's too early to call for a ban because the technology in question does not yet exist. Others say that is exactly the reason to start talking now.
"Our feeling is that [it is] morally and ethically wrong that these machines make killing decisions rather than humans [making] killing decisions," Stephen Goose, director of the arms division at Human Rights Watch, told NBC News.
The group clarifies that it isn't anti-robot, or anti-autonomy — or even anti-drone. It's just that when a decision to kill is made in a combat situation, it wants to ensure that decision will always be made by a human being.
Goose says the title of the new campaign is deliberately provocative, designed to catalyze conversation. "If you have a campaign to stop 'fully autonomous weapons,' you will fall asleep," he said.
"The problem with modern robotics is there's no way a robot can discriminate between a civilian and a soldier," said Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield in the U.K. and an outspoken advocate for "robot arms control." "They can just about tell the difference between a human and a car."
But a treaty prohibition at this time is unnecessary and "might even be counterproductive," cautions Matthew Waxman, a national security and law expert at Columbia Law School. Waxman told NBC News that he anticipates a day when robots may be better than human beings at making important decisions, especially in delicate procedures like surgeries.
"In some of these contexts, we are going to decide not only is it appropriate for machines to operate autonomously, we may demand it, because we are trying to reduce human error," said Waxman.
Michael Schmitt, professor of international law and chairman of the international law department at the U.S. Naval War College, told NBC News that a ban now, as a matter of law, is a "bad idea." When Human Rights Watch wrote a 50-page report on the future of robotic warfare, Schmitt wrote a rebuttal in Harvard's National Security Journal. His main argument: "International humanitarian law's restrictions on the use of weapons ... are sufficiently robust to safeguard humanitarian values during the use of autonomous weapon systems."
Singer, whose work has made him a leading voice in the growing debate over robotic warfare, says that now is the time to talk — now, when Google cars are guiding themselves through San Francisco's streets and algorithm-powered trading programs crash markets based on keywords.
Singer thinks the debate needs to gain traction before governments and big companies become invested in the technology — and begin to influence the direction of policy. "People aren't pushing for more autonomy in these systems because it is cool. They're pushing for it because companies think they can make money out of it," he said.
Fully autonomous weapon systems are "not centuries away," Singer told NBC News. "We're more in the years and decades mode."
Nidhi Subbaraman writes about technology and science.