
The rise of smart machines puts spotlight on 'robot rights'

Are computers on their way to becoming people?
Mark Hamill as Luke Skywalker confers with C-3PO in "Star Wars." Mary Evans / Ronald Grant Archive / Everett Collection

You probably wouldn’t have any qualms about switching off Apple’s virtual assistant, Siri — or Amazon’s Alexa or Microsoft’s Cortana. Such entities emulate a human assistant but plainly aren't human. We sense that beneath the sophisticated software, there’s “nobody home.”

But artificial intelligence is progressing swiftly. In the not-too-distant future we may begin to feel that our machines have something akin to thoughts and feelings, even though they’re made of metal and plastic rather than flesh and blood. When that happens, how we treat our machines will matter; philosophers and scholars are already imagining a time when robots and intelligent machines may deserve — and be accorded — some sort of rights.

These wouldn’t necessarily be human rights. But “if you’ve got a computer or a robot that’s autonomous and self-aware, I think it would be very hard to say it's not a person,” says Kristin Andrews, a philosopher at York University in Toronto, Canada.

Humanoid robot Sophia from Hanson Robotics answers questions at the 2017 Web Summit in Lisbon, Portugal. Patricia De Melo Moreira / AFP - Getty Images

Which raises a host of difficult questions. How should we treat a robot that has some degree of consciousness? What if we’re convinced that an AI program has the capacity to suffer emotionally, or to feel pain? Would shutting it off be tantamount to murder?

Robots vs. apes

An obvious comparison is to the animal rights movement. Animal rights advocates have been pushing for a reassessment of the legal status of certain animals, especially the great apes. Organizations like the Coral Springs, Florida-based Nonhuman Rights Project believe that chimpanzees, gorillas, and orangutans deserve to be treated as autonomous persons, rather than mere property.

Steven Wise, who leads the organization’s legal team, says that the same logic applies to any autonomous entity, living or not. If one day we have sentient robots, he says, “we should have the same sort of moral and legal responsibilities toward them that we’re in the process of developing with respect to nonhuman animals.”

Of course, deciding which machines deserve moral consideration will be tricky, because we often project human thoughts and feelings onto inanimate entities — and so end up sympathizing with entities that have no thoughts or feelings at all.

Consider Spot, a doglike robot developed by Boston Dynamics. In 2015, the Waltham, Massachusetts-based company released a video showing employees kicking the four-legged machine. The idea was to show off Spot’s remarkable balance. But some people saw it as akin to animal cruelty. People for the Ethical Treatment of Animals (PETA), for example, issued a statement describing Spot’s treatment as “inappropriate.”

Kate Darling, a researcher at the MIT Media Lab in Cambridge, Massachusetts, observed something similar when she studied how people interact with Pleo, a toy dinosaur robot. Pleo doesn’t look lifelike — it’s obviously a toy. But it’s programmed to act and speak in ways that suggest not only a form of intelligence but also the ability to experience suffering. If you hold Pleo upside-down, it will whimper and tell you to stop.

Visitors stroke a Pleo robotic dinosaur at the CeBIT technology trade fair in Hanover in 2011. Sean Gallup / Getty Images file

In an effort to see just how far we might go in extending compassion to simple robots, Darling encouraged participants at a recent workshop to play with Pleo — and then asked them to destroy it. Almost all refused. “People are primed, subconsciously, to treat robots like living things, even though on a conscious level, on a rational level, we totally understand that they’re not real,” Darling says.

While neither Pleo nor Spot can feel pain, Darling believes it’s worth paying attention to how we treat these entities. “If it is disturbing to us to behave violently towards them — if there’s something that feels wrong about it — maybe that’s a piece of our empathy that we don’t want to turn off, because it could influence how we treat other living things,” she says. (This is a key question raised by the TV series “Westworld,” in which guests at a theme park are encouraged to treat ultra-lifelike humanoid robots however they please.)

Conversing with robots

For now, mistreating Pleo or any other existing robot is no crime — as long as you’re the owner. But what about mistreating a bot that we believed really had some form of consciousness? And how would we be able to tell if a machine has a mind in the first place?

Computer science pioneer Alan Turing pondered this question more than half a century ago. The way Turing saw it, we can never know for sure what a machine is feeling or experiencing — so our best bet is simply to see if we can carry on a conversation with it just as if it were human (what we now call the Turing test).

Jimmy Fallon talks with Sophia on "The Tonight Show." NBC

Given the complexity of human conversation, building a machine capable of engaging in lengthy verbal exchanges is a daunting task. But if we could build such a machine, Turing argued, we ought to treat it as though it’s a thinking, feeling being.

Mark Goldfeder, an Atlanta-based rabbi and law professor, has reached a similar conclusion: If an entity acts human, he wrote recently, “I cannot start poking it to see if it bleeds. I have a responsibility to treat all that seem human as humans, and it is better to err on the side of caution from an ethical perspective.”

The obvious conclusion is that rights ought to be accorded not on the basis of biology but on something even more fundamental: personhood.

What rights?

If we wind up recognizing some intelligent machine as a person, which legal rights would we be obliged to bestow on it? If it could pass the Turing test, we might feel it would deserve at least the right to continued existence. But Robert Sparrow, a philosopher at Monash University in Melbourne, Australia, thinks that’s just the beginning. What happens, he wonders, if a machine's “mind” is even greater than a human's? In a piece that appeared recently on TheCritique.com, he writes: “Indeed, not only would it be just as wrong to kill a machine that could pass the Turing test as to kill an adult human being, but, depending on the capacities of the machine, it might even be more wrong.”

Maybe that makes sense from the perspective of pure logic. But Ryan Calo, an expert in robotics and cyber law at the University of Washington in Seattle, says our laws are unlikely to bend that far. “Our legal system reflects our basic biology,” he says. If we one day invent some sort of artificial person, “it would break everything about the law, as we understand it today.”

For Andrews, the key issue is the entity’s right to have its own interests recognized. Of course, it may be tricky determining what those interests are — just as it can be hard for people from one culture to understand the desires of people from another. But when we recognize something as a person, we’re obligated to at least try to do the right thing, she says. “If we realize that something is actually a ‘someone,’ then we have to take their interests into account.”

And perhaps it’s not so far-fetched to imagine that those interests might include continued existence — in which case we might want to think twice before reaching for the off button.
