
Military Struggles to Find Limits of Robot Autonomy

Source: InnovationNewsDaily.com


WASHINGTON, D.C. – When science meets war, the capability of technology often outpaces the moral understanding of its use. Just as conflict forced previous generations to feel out the rules around the acceptable use of nuclear weapons or poison gas, advances in computer technology and a decade of drone combat have generated a debate among the military, the defense industry and policymakers over how much autonomy to grant battlefield robots.


All parties agree that no computer should ever make the decision to pull the trigger. They also agree that controlling a robot's every action wastes military resources and dangerously limits the capability of these weapons. Yet soldiers, bureaucrats and engineers strongly disagree about whether to err on the side of "killer robot" or "remote-controlled toy."

"The barrier isn't technical, it's doctrinal," said Ed Godere, a senior vice president and director of the robotics division at Qinetiq, a company that makes armed ground robots. "It may be more that they haven't figured out how to integrate it into how they fight."

The debate largely breaks along occupational lines. Think-tankers and policy officials envision drones with the most autonomy, with robots automatically driving convoys, flying supply missions and conducting reconnaissance. The scientists and companies that design and build these robots stake out a middle ground: drones smart enough to follow orders from humans without micromanagement during a mission. Active military personnel generally remain the most skeptical of robot autonomy, wanting robots only smart enough to alleviate mundane burdens such as basic vehicle navigation or hauling gear.

Autonomy is a necessary feature for all unmanned systems because in the future, a single operator may control multiple robots simultaneously. Without some degree of computer intelligence, no one person could concentrate on maneuvering an array of ground and airborne weapons at the same time.

"What I see in the future is convoys being lead by a manned vehicle, and autonomous vehicles following behind it. Also, autonomous air vehicles working in tandem with manned fighter planes," said Ryan Vander Ryk, a senior consultant at IHS Aerospace and Defense Consulting. "The Marines were experimenting with bringing in supplies with a UAV. It was piloted by someone on the ground, but they wanted to get to the level of just pushing a button and the robot does the rest."

However, Lt. Col. Nick Kioutas, an unmanned system acquisition officer with the U.S. Army, said his own experiences in Iraq and Afghanistan make that level of autonomy seem impractical. Instead of autonomy allowing soldiers to maximize their control of multiple robots, Kioutas thinks automation better serves military objectives by freeing up soldiers to concentrate on the jobs that robots cannot perform, such as using deadly force or coordinating with other humans.

"You want to have enough autonomy so someone can be in a vehicle, but not watch the road," Kioutas told InnovationNewsDaily. "You may have one operator working multiple UAVs, but more likely, you will have one guy doing less while working a single vehicle."

Manufacturers have been the most proactive in determining how much autonomy a robot can have and still complete its mission without endangering soldiers or civilians. To do so, companies such as QinetiQ engage in extensive testing, putting their robots through the wringer with soldiers and engineers alike, Godere told InnovationNewsDaily. Ultimately, they hope to find the correct amount of autonomy before a proactive robot ever reaches the battlefield.

Of course, if the tests themselves are not trustworthy, it may take years of on-the-job training to determine how smart to make deadly robots.

"Testing of autonomy is very important," Kioutas said, "but we don't necessarily know how to do it."
