"I'm sorry, Dave. I'm afraid I can't do that."
Anyone who's seen "2001: A Space Odyssey" likely remembers the scene in which the HAL-9000 computer decides to go against orders and strand one of the main characters outside the spacecraft. But don't worry: This research project to teach robots when they should refuse to listen to humans won't end up like that. Probably.
Tufts University roboticists Gordon Briggs and Matthias Scheutz are studying how to make robots more intelligent about the actions they take, and about when to refuse explicit instructions.
Should a robot walk forward even if it means falling off a cliff? Should it pick up an object even if it's red hot? Should it move its arm even if it might strike someone? And perhaps most difficult to determine, should it even listen to the orders it's being given in the first place?
"As the set of capabilities of robotic agents increase in general, so too will human expectations about the capabilities of individual robotic agents," reads Briggs and Scheutz's paper (PDF), "as well as the set of actions that robotic agents are capable of performing, but which situational context would deem inappropriate."
The two created a few videos showing off examples of how such interactions might look. A robot might, for instance, refuse to walk off the edge of the table, since it would fall. But upon being told that it would be caught, it accepts the order.
In another situation, the robot may refuse an order because the one giving the order has no authority to do so. Here is perhaps the beginning of a "2001"-type scenario, but it wouldn't do to have kids and strangers telling robots to go jump off cliffs, either.
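The table-edge and authority scenarios above can be sketched as a pair of simple checks the robot runs before complying: is the speaker authorized, and would the action violate a safety condition given the robot's current beliefs? The sketch below is purely illustrative; the class names, belief flags, and structure are assumptions for this article, not Briggs and Scheutz's actual architecture.

```python
# Illustrative sketch only: a robot that checks authority and safety
# conditions before accepting an order. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Order:
    action: str
    speaker: str

@dataclass
class Robot:
    # Beliefs the robot holds about the world; trusted speakers can revise them.
    beliefs: dict = field(default_factory=lambda: {
        "edge_ahead": True,       # the robot perceives a table edge in front of it
        "will_be_caught": False,  # no one has promised to catch it yet
    })
    authorized: set = field(default_factory=lambda: {"operator"})

    def evaluate(self, order: Order) -> str:
        # Authority check: does the speaker have standing to give this order?
        if order.speaker not in self.authorized:
            return "refuse: speaker lacks authority"
        # Safety check: walking forward off an edge is unsafe unless caught.
        if (order.action == "walk_forward"
                and self.beliefs["edge_ahead"]
                and not self.beliefs["will_be_caught"]):
            return "refuse: walking forward is unsafe, I would fall"
        return "accept: executing " + order.action

    def inform(self, speaker: str, fact: str, value: bool) -> None:
        # Only trusted speakers may update the robot's beliefs.
        if speaker in self.authorized:
            self.beliefs[fact] = value

robot = Robot()
print(robot.evaluate(Order("walk_forward", "operator")))  # refused: unsafe
robot.inform("operator", "will_be_caught", True)          # "I will catch you"
print(robot.evaluate(Order("walk_forward", "operator")))  # now accepted
print(robot.evaluate(Order("walk_forward", "stranger")))  # refused: no authority
```

The key design point mirrored from the videos is that a refusal is not final: new information from a trusted human ("you will be caught") updates the robot's beliefs, and the same order is then re-evaluated and accepted.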
Such intelligent responses are still some distance off, but with robotic vacuums and hotel attendants coming into vogue, and perhaps soon delivery drones, the way robots interact with humans grows more important by the year.