Are you just a computer made of meat? Are all your thoughts, feelings and experiences nothing more than circuits made from neurons in your head?
If you’re like a lot of people, your answer to this question will be a definitive “No!” From science to philosophy, there are lots of good reasons to hold that human beings are more than just computing machines. Unfortunately, many of the technologists bringing versions of artificial intelligence to the market are already sure they know that we are.
For them people are, indeed, just biological computers. And, based on this flawed perspective, powerful new machines are being built and pushed out into the world right now. If we’re not careful, we may all find ourselves living in a world of unconscious machines destined to make us less human by boxing us into an existence that fits their algorithms.
A few years ago, I attended a lecture that scared the pants off me because it demonstrated this mindset at work. It was given by the head of a tech company promising a new era of “emotional computing,” but the phrase showed a fundamental blindness about what emotions, and our other rich interior responses to the world, really are.
The idea was simple: A computerized device would use its camera to map the instantaneous position of your facial muscles and, based on that map, predict your emotional state. The computer would then change its responses to you based on these predictions. The audience was told to imagine how wonderful it would be to leave our lonely grandmas with an emotional robot that could keep them company all day long.
We can ignore for a moment that the killer emotional computing app with real money-making potential isn’t for lonely grandmas, but one that lets advertisers fine-tune the next promotion they deliver to our eyeballs (and its surrounding expressive facial muscles). What really frightened me was the unconscious philosophy floating behind the idea of emotional computing, and behind a lot of artificial intelligence.
Whether they’ve thought about it deeply or not, many people working in tech subscribe to what philosophers call a “computational theory of mind.” The concept is that you are, basically, software running on your brain’s gooey neural hardware — or a computer made of meat. The world out there delivers sensory inputs to your eye, nose, ears, etc., and your brain processes those inputs into outputs that are your actions in the world. (Look, a tiger! Run!). Your internal experiences, like love or grief, are just the result of the brain’s information processing. They have no meaningful reality on their own. In the end, you really are nothing but your neurons.
But this is a philosophical position, not a scientific truth, and there’s good reason to think it’s wrong, or at least just a small part of the whole picture. For starters, it has no place for the vividness of our internal lives. The most important thing about being human is that we’re all the one and only subject of our own lives. To be conscious and human is to constantly be at the center of a richly felt world of experience.
That experience appears to us as a seamless whole that can’t be reduced to just this or that electrical input. Beautiful sunsets are not merely a bunch of colors fed into a light receptor. There has to be a subject and an experiencer for that beauty to be felt as something happening to someone.
But there is no real attempt to recreate this inner experiencer in the computational theory of mind, or explanation for how it would develop automatically. Instead, there’s just a hope that, once machines get complex enough, an interior life will simply pop up. But until that happens (if it ever does), all the artificial intelligence machines we build will be zombies, dead to themselves and lacking that essential world of inner experience that makes human beings conscious and alive.
And that’s what’s so frightening.
Since the rise of the internet, we’ve been rebuilding society with digital technologies, and they’ve done some truly amazing and wonderful things. But as these technologies have become more powerful (including early versions of AI built via Big Data), we’ve also seen their dark side. The bots of social media, the most obvious example, have distorted democracy in dangerous ways, and we still can’t foresee the long-term consequences.
Our world, and even the texture of our day-to-day lives, is being radically transformed by these machines. Already we rely on them to get us around via GPS, we talk to them via Alexa and Siri, and we are subtly influenced by their recommendations in venues as diverse as Netflix, Amazon and Facebook.
As more and more of our world — and our time — falls under the sway of AI, we will be increasingly influenced by its mechanisms. That is why it’s so problematic to approach AI as though the processing part of the brain were all it needs to capture, and nothing else.
Whatever the people at that emotional computing company can build will be blind to the rich experience of what an emotion really feels like. But by sending this technology out into the world, it might change the world just like social media did (how many people have you seen staring at their phone in just the last hour?). Suddenly we might be leaving our grandmas and maybe even our kids with emotional robots because, oh well, everybody’s doing it.
Beyond the specific example of emotional computing, the big problem with thinking of people as computers is that once you take that step, the only thing that matters is what can be computed and therefore predicted. Exactly how well can your last music purchase be linked to your political affiliation? Given your credit card use last week, exactly how likely is it that you’ll respond to an ad for a Disney cruise on Facebook?
This is how, if we’re not careful, the machines we built with an implicit and flawed idea of what it means to be human can end up trapping us in that faulty vision. Like our grandmothers getting stuck with an unconscious robot that only mimics emotions, we’re going to inadvertently build a society that impoverishes our humanity through these new technologies, and their flawed ideas about consciousness, in a thousand little ways.
You don’t have to be anti-technology to see the threat. Advances in artificial intelligence and digital technology can do great things, such as help us fight climate change and improve health care. They also let us instantly access every episode of “Star Trek” and Aretha Franklin’s whole discography.
But we don’t have to accept the bad to enjoy the good. As a society, we have the capacity and the right to reflect on, and even reject, some technologies before they get woven into the fabric of our lives. And the ability to make those kinds of choices will not come from computations but from what we value most, what we feel most deeply. That, after all, is what makes us truly human.