Stephen Hawking warned it could "spell the end of the human race." In the Terminator movies, it results in the robot apocalypse.
Artificial intelligence has a friend, however, in "Chappie." The film from "District 9" director Neill Blomkamp looks at a world where robots hold the solution to our problems and humans are the villains — more specifically, Hugh Jackman, decked out in a ridiculous mullet and short khaki shorts.
"The moment we gave birth to AI, it would be a different planet," Blomkamp told NBC News.
Disease? Eradicated. Poverty? Humans could spend more time thinking and less time working to make ends meet.
"You would have something that has 1,000 times the intelligence that we have, looking at the same problems that we look at," he said. "I think the level of benefit would be immeasurable."
What we mean when we talk about AI
It has been 60 years since computer scientist John McCarthy coined the term "artificial intelligence," which he imagined as "computer programs that can solve problems and achieve goals in the world as well as humans."
In the narrow sense, that has already happened. IBM's Watson proved it could play "Jeopardy!" as well as most contestants. In science fiction, artificial intelligence usually equates to a broader set of skills, the computer equivalent of a human brain.
"Self-awareness means you perceive yourself as being unique," Wolfgang Fink, a roboticist at the University of Arizona, told NBC News. "It's like the Latin saying 'cogito ergo sum' — I think, therefore I must be."
He isn't sure anyone will ever build a robot that is self-aware, and if it happens, he thinks it won't be for a very long time.
McCarthy died in 2011, the same year that Apple unveiled the iPhone 4S equipped with Siri. The gulf between Siri and the digital assistant in "Her" (another recent movie about AI) is vast.
Alan Winfield, an electrical engineer and researcher at the Bristol Robotics Laboratory, compared it to making the jump from spaceflight to faster-than-light spaceflight — in other words, it would require a massive technological breakthrough that might never come.
Still, that hasn't stopped people from fretting publicly about the risks of AI. Elon Musk said it could be "potentially more dangerous than nukes" and donated $10 million to a program aimed at making sure the technology benefits mankind.
The worry is that once robots are sentient, they could modify themselves, becoming smarter and smarter until they change the world in ways we can't imagine — a tipping point commonly referred to as "the singularity."
In one disastrous scenario, robots gain sentience and destroy humanity, à la "The Terminator." Another possible future involves office supplies. In a thought exercise popular with AI experts, super-intelligent robots programmed to make paper clips stop at nothing — including consuming all of the world's resources — to complete their task.
It's not like they're evil. It's just that computer programs can have unintended consequences.
"What we need to be afraid of is truly autonomous systems, where we can't understand how a system came up with a decision," Fink said. "At that point, you have basically lost control."
Why robots won't kill us
Bender from "Futurama" might want to "Kill all humans," but there are plenty of friendly robots in science fiction as well. Winfield is especially fond of Data from "Star Trek: The Next Generation" and the robots from Isaac Asimov's stories.
Like Asimov, Winfield has spent a lot of time thinking about how robots should act, eventually helping to create several "principles of robotics" to guide companies toward making safer machines.
"I'm not doing it because of some nightmare scenario of robots taking over the world," he told NBC News. "I'm doing it because even commonplace robots need to be more than just safe. Good engineering requires that we build safety into our systems."
Robots that manufacture cars and disarm bombs need to be safe, or else nobody would buy them. It's likely that the advanced robots of the future would be built with the same level of caution, he said.
Even if robots become self-aware, it's not entirely clear why they would want to destroy humanity.
"If there is artificial intelligence, it's not governed by the same biological boundaries that humans or any other animals are governed by," Blomkamp said.
Robots don't need to impress mates or protect their young. To assume that they will "treat humans like humans treat the rest of the planet," he said, was jumping to conclusions.
The future of AI
So, why do we even want robots that can think for themselves? For one thing, they might be able to help solve our medical, social and environmental problems.
They might also help us explore the galaxy. It can take as long as 22 minutes to get a message from Earth to Mars. Imagine how long it would take for commands to reach a spaceship venturing outside of our solar system.
Robots that could analyze data, decide what is worth investigating and create a plan of action would make deep space exploration much more feasible. They could also work in other places that are too dangerous or remote for humans.
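The back-of-the-envelope math behind that communication delay is simply distance divided by the speed of light. A minimal sketch (not from the article; the distances used here are approximate, and the Voyager 1 figure is an assumed ballpark):

```python
# Rough illustration: one-way light-travel time for radio commands.
# Mars's distance from Earth varies between roughly 55 million and
# 400 million km; the Voyager 1 distance is an assumed ballpark value.
C_KM_PER_S = 299_792  # speed of light, km/s

def one_way_delay_minutes(distance_km: float) -> float:
    """Minutes for a radio signal to cover distance_km at light speed."""
    return distance_km / C_KM_PER_S / 60

# Mars near its farthest point from Earth (~401 million km):
print(f"Mars at farthest: {one_way_delay_minutes(401e6):.0f} minutes")

# A probe far outside the planets, e.g. Voyager 1 (~20 billion km, assumed):
print(f"Outer solar system: {one_way_delay_minutes(20e9):.0f} minutes")
```

At Mars's farthest distance the one-way delay works out to about 22 minutes, matching the figure above, while a probe at Voyager-like distances would wait the better part of a day for a single command — which is why onboard autonomy matters for deep space missions.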
The benefits of AI are "potentially staggering," said Winfield, who hopes fears over an unlikely "Terminator" scenario don't forestall funding for artificial intelligence research.
"I think people are too pessimistic," he said. "It's right to be a little bit cautious, but not obsessively."
That opinion is shared by Hugh Jackman, who told reporters that he thought AI would "ultimately be used for good," and Blomkamp, who thinks irrational fear shouldn't stall technological progress.
"It's not based on facts," Blomkamp said. "It's based on an emotional gut reaction. You can't relate to a computer."