By Denise Chow, Shivani Khattar and Brock Stoneham
PALO ALTO, Calif. — Fans of the HBO series "Westworld" know that it doesn't shy away from big and often mind-bending themes. From questions about robot consciousness to the ethics of artificial intelligence, the show has taken some of the most compelling questions about technology and neuroscience and brought them into the realm of pop culture.
To learn more about the real-life science behind the fictional realm of Westworld — and whether one day we'll share our lives with sentient robots of the sort featured on the show — MACH recently sat down with Westworld's science adviser, Stanford University neuroscientist David Eagleman.
The interview, which took place at the headquarters of NeoSensory, a company Eagleman co-founded to commercialize "new sensory experiences," has been edited for clarity and brevity.
Warning: The interview contains spoilers from season two.
MACH: How did you become "Westworld's" science adviser?
Eagleman: I have spent a lot of my career thinking about issues of consciousness and how consciousness arises from the operations of the brain, and so I ended up meeting one of the writers and producers and we got into a conversation. That is how that started.
How close are we to a future as depicted in the show?
I think it is really, really distant for a couple of reasons. One is that AI can do very impressive things right now, like tell you if it is a picture of a dog versus a cat, or play chess or "Go" better than a human can. But these are extremely rarefied examples.
What AI is not any good at is the sort of broad intelligence that, for example, a 3-year-old has. A 3-year-old can do things like pick up a dish from the sink and put it in the dishwasher and communicate with people and manipulate people and navigate a complex room without falling down or running into the furniture — all kinds of things that AI really stinks at currently. We are not really close to having AI that seems like a human.
Number two is that building a stand-alone robot — it’s not clear the degree to which that is in anybody's future. Why? Software is so powerful and building a robot that is like a human is not really so practical, because building robots is really, really hard. You are constantly tending to the toilet of the robot, as the expression goes, meaning this wire breaks or that joint breaks, or whatever. If you are going to build a physical device, you might as well make something that is better than a human — that has wheels or other sorts of features.
So I don't think we will have Westworld amusement parks. Instead, we will have amusement parks with sort of a “Star Wars” bar of various cool-looking robots that are not particularly intelligent. They aren't exactly like humans, but instead they do certain things much better than humans, and other things not well at all.
What is it about the human brain that makes it so hard to replicate with artificial intelligence?
We don't know all the issues going on in the brain. It is a mystery that has to be plumbed probably for many more decades or centuries before we understand the principles of brain operation. What we have happening in AI are these very useful networks like what are called deep learning networks or convolutional networks that can do clever sorts of things, but it is nothing like how the brain is actually operating.
There are almost a hundred billion neurons — those are the specialized cell types in the brain — and each one of those has about 10,000 connections to its neighbors. So there are almost a thousand trillion connections in the brain, and we just haven't figured out all the secrets to it yet.
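The arithmetic behind that figure is straightforward; using the approximate numbers Eagleman cites (not exact anatomical counts), it works out like this:

```python
# Back-of-envelope check of the connection count cited above.
# These are the interview's round figures, not precise anatomy.
neurons = 100e9             # ~100 billion neurons
synapses_per_neuron = 1e4   # ~10,000 connections each
total_connections = neurons * synapses_per_neuron

print(f"{total_connections:.0e}")  # 1e+15, i.e. about a thousand trillion
```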
What would it mean for a robot to become sentient?
One of the big open questions in neuroscience is a question of, could you build something like a computer or any kind of mechanical device that becomes conscious? In other words, think of consciousness as the thing that flickers to life when you wake up in the morning. This awareness of what it is to be you. Could a machine ever have that?
There are a few ways to look at this problem. One issue is that we are just made up of pieces and parts. They are very sophisticated biological pieces and parts, but fundamentally each is just following the rules of physics and chemistry, and so everything is driving everything else and it's just a machine. In that way we think it might be possible to build a machine that is conscious, because we are the proof. The flip side of that is we just don't have any sense of how that would go. In other words, we don't have any theories that explain what consciousness is.
There is no way to take the current tools of science that we have and say, “OK, I can just make an equation. There is the taste of feta cheese or the smell of cinnamon or the redness of red or the pain of pain.” That kind of thing. We don't know how to phrase the private, subjective experience, and so right now we are just in a situation where there are no good theories explaining what consciousness actually is and how you could ever build a machine to get there.
A recent episode featured a special vest that you and your colleagues developed. What is it, and how does it work?
What we are doing here at NeoSensory is building new ways to pass information into the brain, and specifically it is via patterns of touch on the skin. The vest is covered with vibratory motors like the little buzzers in your cellphone, but lots of them, and the first thing that we are doing has to do with deafness. With somebody who is completely deaf, we can capture sound and pass in information that way. And deaf people can learn to hear the world, to understand what is going on by the feel on their skin. It is actually doing exactly what the inner ear does, which is to capture sound, break it up into different frequencies and send it off to the brain. It is just now happening on your torso.
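The principle Eagleman describes — capture sound, break it into frequency bands, and route each band to a motor — can be sketched in a few lines. This is an illustration of the idea only, not NeoSensory's actual signal processing; the sample rate, motor count, and band-splitting scheme here are assumptions.

```python
import numpy as np

def sound_to_motors(frame, n_motors=32):
    """Map one frame of audio to a vibration intensity (0..1) per motor.

    Mimics, very loosely, what the inner ear does: split sound into
    frequency channels, then send each channel's energy onward — here,
    to a motor on the torso instead of to the auditory nerve.
    """
    # Magnitude spectrum of the windowed frame
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    # Group spectrum bins into contiguous bands, one band per motor
    bands = np.array_split(spectrum, n_motors)
    energy = np.array([band.mean() for band in bands])
    peak = energy.max()
    return energy / peak if peak > 0 else energy

# A 440 Hz tone should mostly drive the low-frequency motors.
sample_rate = 16000
t = np.arange(1024) / sample_rate
intensities = sound_to_motors(np.sin(2 * np.pi * 440 * t))
```

In a real device the intensity vector would be refreshed many times per second and sent to the motor drivers; the brain then learns to interpret the moving pattern, much as it learns to interpret signals from the cochlea.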
We are pursuing several other projects. One of them has to do with being able to detect things at a distance, given patterns on your skin. We have done this with blind participants. It turns out that Google has LiDAR set up in their offices. LiDAR is the laser radar that you see on top of autonomous cars, so in that particular office they know the location of everything — where people are moving, where the furniture is, and so on. We tapped into that data stream so that we could bring in blind people and let them feel what is happening around them. They can immediately intuit from what they feel on their skin that somebody is off at a distance, and as the person gets closer the vibration gets more intense. When somebody is moving around behind them, they can tell that is what is happening in the world around them.
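The distance cue described above — weaker vibration for far objects, stronger for near ones — can be sketched as a simple mapping. The inverse-linear form and the 10-meter range here are assumptions for illustration; the actual mapping NeoSensory uses is not described in the interview.

```python
def distance_to_intensity(distance_m, max_range_m=10.0):
    """Map a tracked object's distance to a vibration strength in [0, 1].

    Nearer objects vibrate harder; anything beyond max_range_m is silent.
    A linear falloff is assumed here purely for illustration.
    """
    if distance_m >= max_range_m:
        return 0.0                           # out of range: no vibration
    return 1.0 - distance_m / max_range_m    # closer -> stronger

# Direction would pick *which* motor on the torso fires, so a person
# approaching from behind is felt on the wearer's back.
print(distance_to_intensity(2.0))   # 0.8 — close, strong buzz
print(distance_to_intensity(12.0))  # 0.0 — out of range
```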
What did you think about how it was used in the episode?
The idea is that the private military contractors who drop in are able to tell the location of the hosts [the robots on the show] by feeling it on their skin so that they would know, "Okay, there is a host off to the right one-third of a mile, and there is one behind me" and so on. The interesting part in this episode is that the humans, the private military contractors who wear this, get killed pretty rapidly. I was telling my friends that next time I want to put the vest on the host so that it lasts longer in the episode.
Has the show affected how you think about neuroscience?
A lot of the questions at the center of "Westworld" — about free will, about whether a machine can become conscious, and so on — these are really the most pressing questions of our time. And these are all questions that have been at the heart of neuroscience for a long time. When ideas get wrapped in fiction, it allows them to reach a much larger audience. There is something very powerful about storytelling, where one can tell a terrific story and carry millions of people and embed in there very deep, salient, important questions that are going to be critical to our future. And that is what we see in "Westworld."
One big issue raised this season is whether preserving consciousness could be a form of immortality. Is that really possible?
If we can replicate human consciousness — if it turns out to be possible to download somebody's brain and reconstruct it on a different substrate — then in theory you could put it inside of a robot. One of the questions is about the ethics of doing that. I think we don't know enough yet to say too much about that. We are such a long way off from that actually being a possibility, but when that time arrives we will have to deal with this question of: What does it mean for this person to not die?
Probably some of the ethical issues will revolve around the issues of affordability, as in rich people can live forever and poor people can't. These are tough issues to wrestle with. It is already the case that rich people get all sorts of benefits and medical care of a quality that poor people do not, so the question is, can we make it in a way that would be affordable to everyone? There are going to be a whole host of issues that open up here, and this is still 50 or 100 or 200 years off. But at some point we are going to have to deal with this question.