In the blockbuster 1990 film "Total Recall," Arnold Schwarzenegger plays a construction worker who goes into Rekall Inc. and sits in a chair with a device that plugs into his brain. The device is meant to give him false memories of a vacation to Mars, which is much more affordable (and less time-consuming) than taking an actual trip off planet. The movie, based on a short story by science fiction writer Philip K. Dick, quickly devolves into a dark exploration of how such control of memories plays into a dystopian future in which no one is quite sure what's real and what isn't.
Dick wrote the short story in the 1960s, and the movie featured one of the first popular instances of what we now call a brain-computer interface, or BCI, in which a computer is able to read what's happening in your mind, or, more disturbingly, write information into your brain via electrical signals. While noninvasive BCIs that read basic brain wave information have been around for years in labs, today there's a rush to bring invasive devices to market that promise to reach sci-fi levels of reading your thoughts, memories, intentions and actions.
While the science to connect our brains to computers is slowly becoming a reality and the near-term applications will mostly benefit society, the dangers of dystopian applications are becoming more real, too. As a result, we urgently need to start grappling with the legal, ethical and moral issues.
The most recent significant unveiling of BCI technology came from Mars-bound SpaceX and Tesla CEO Elon Musk at the end of August. He gave a much-anticipated demo of the product of his startup Neuralink, which is building a device that will allow wireless communication between a brain and the internet through a surgical implant.
While there is no talk (yet) of writing memories using Neuralink, Musk said, without giving a specific time frame, that you will definitely be able to use it to "telepathically" summon your Tesla. Musk, no stranger to making bold predictions, believes using BCIs will soon allow you to stream music directly into your brain and, after that, download your memories onto silicon. Musk believes this could begin within 10 years, though many neuroscientists believe it will take much longer.
The end game, Musk has revealed, is to merge the human brain with artificial intelligence, making us smarter and more able to compete with superintelligent artificial intelligence, ultimately preventing an AI apocalypse. The demo itself was much less ambitious — a pig named Gertrude had the Neuralink device installed in its brain, which sent out signals in real time that could be seen on a tablet and heard as a series of beeps.
Neuralink is an invasive BCI, which surgically embeds tiny microwires (thinner than a human hair) with electrodes that detect signals from individual neurons. This kind of BCI is known today as a neural lace, a term inspired by the science fiction books of Iain Banks, in which a future spacefaring civilization implants a neural lace in the young, leading to a more "intelligent" citizenry connected to and augmented by AI.
Unlike Neuralink, noninvasive BCIs have been around for a while in the form of biofeedback devices. These rely on EEG signals measured by commercially available headbands or by electrodes placed on the scalp in a lab.
Short-term applications of noninvasive BCIs are well underway, including using EEG headbands to help reduce stress and improve meditation and to control interfaces with computers and video games. Boston-based startup Neurable showed a demo of a person controlling objects in virtual reality in 2017 using a combination of brain signals and eye-tracking in a VR headset. In 2019, Facebook, after buying CTRL-Labs for more than half a billion dollars, demoed "brain typing" — typing a message simply by thinking about words and phrases.
In any competitive arena in which response time matters, a neural interface can provide an edge of several hundred milliseconds. A startup called Brink Bionics has created a glove that measures electrical signals along the nerves of the arm, giving a decisive edge in supercompetitive first-person shooter video games.
Recently, an AI pilot beat a human fighter pilot in a limited simulated dogfight. Of course, the human pilot experienced the delay of signals moving from brain to hands, while the AI had no such disadvantage. As depicted in the 1980s Cold War movie "Firefox," a direct mind-to-fighter interface provides such a disproportionate advantage to one side that the U.S. is willing to risk sending Clint Eastwood into the Soviet Union to steal the futuristic jet. (One snag is that once he finally gets into the plane, he realizes he has to "think" in Russian, a language he learned as a child.)
When it comes to invasive BCIs, those who are horrified at the thought of cutting open a healthy skull to implant wires and a chip should be happy to know that the first applications for Musk's Neuralink and similar technology will most likely be to help those with brain-related injuries and illnesses, from ALS to strokes to spinal cord injuries to muscular dystrophy.
While most of these focus on reading intentions from the brain and then bypassing neural pathways that aren't working in order to move limbs and prosthetics, there's also the possibility of bypassing injured areas to send information directly into the brain, allowing the blind to see, for example. In general, these devices will require approval from the Food and Drug Administration to show that they are safe before they can be implanted in humans (though Musk said Neuralink recently got a special "breakthrough device" clearance from the FDA so it can move more quickly).
But as we look beyond these therapeutic or response-time applications to those that read and transmit thoughts and memories, a whole host of issues comes up. The real difficulty — and this is true for both invasive and noninvasive approaches — is to understand what specific electrical signals mean, whether it is a memory, a choice, an intention, a daydream, etc. This requires sharing your brain data so they can be merged with and compared to data from a large sample of others. The only way for your Tesla to know that you want to summon it is to match your brain signals with those it has previously stored.
While noninvasive BCIs have thus far been effective only in applications with small numbers of choices (pick A or B or C; move the mouse to the left or the right), invasive BCIs promise much more granular data from individual neurons (of which the brain has on the order of 80 billion) that could represent almost unlimited thoughts and options.
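The matching described above, comparing an incoming brain signal against previously stored patterns to pick one of a few discrete choices, can be illustrated with a toy sketch. This is purely hypothetical, not how any real BCI works: real neural data is vastly noisier and higher-dimensional, and the labels, values and nearest-template approach here are invented for illustration.

```python
import math

# Hypothetical stored "templates": averaged signal patterns previously
# recorded while the user thought about each of a few discrete choices.
templates = {
    "left":  [0.9, 0.1, 0.2],
    "right": [0.1, 0.8, 0.3],
    "stop":  [0.2, 0.2, 0.9],
}

def classify(signal):
    """Match an incoming signal to the closest stored template
    (nearest-template matching by Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda label: dist(signal, templates[label]))

# A new reading that most closely resembles the "right" template.
print(classify([0.15, 0.75, 0.35]))  # -> right
```

The key limitation the paragraph above points to is visible even in this sketch: the system can only ever output one of the labels it has stored, which is why richer, per-neuron data is needed before "almost unlimited thoughts" could be decoded.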
To categorize these patterns of neurons into a usable framework will be a more complex task than mapping the human genome, making it impossible without AI and machine learning acting on large databases of brain patterns from selected populations. And so AI will be involved in BCIs whether we intend it to or not. This raises issues around privacy and security, as well as problems inherent in AI and machine learning, such as racial and selection bias.
Going further, the ambition to use invasive BCIs to decode memories and "download" them onto servers isn't limited to Neuralink. If these efforts succeed, they could be incredibly beneficial, for instance by allowing people with Alzheimer's and their loved ones to relive experiences as memory loss becomes a thing of the past.
But decoding and storing memories raise a new set of ethical, moral and legal questions. For instance, who would own these memories after a person has died? Could the police obtain warrants to search through memories? Given that memory itself isn't completely reliable, could memories be used in lawsuits? How could we ensure that unscrupulous professionals don't sell or share them?
On the (even) darker side, if we decode memories and then are able to send in signals to play them back, or even tamper with them, we'll find ourselves in "Total Recall" territory. In that case, can people's memories ever be trusted again? How do we separate truth from fiction about the past once memories can be rewritten?
And we haven't even started talking about the potential for hacking.
Perhaps even more relevant today is that if big tech companies are allowed to integrate brain data with other personal data (such as one's profession, criminal record, etc.), AI will almost certainly be tasked with finding correlations between "thoughts" and behaviors — and we then move into a fully dystopian scenario.
Steven Spielberg's "Minority Report," another film adapted from the mind of Philip K. Dick, depicts this troubling possibility. In this future, naturally telepathic humans, called "pre-cogs," are able to predict whether a person is going to commit a crime, which then triggers, through a neural connection, an alert to the pre-crime division, which rushes to arrest the suspect before he or she has broken the law. Technology (AI and a large amount of BCI data) could replace the supernatural pre-cogs, leading to a new kind of surveillance state and very troubling consequences for free and open societies.
We are, of course, still years (or decades) away from being able to realize many of these predictions. But the measuring, storing and analyzing of neural data that will result from brain-computer interfaces hitting the market will start to happen soon. We need to think through these issues, providing guidelines and checks on potential abuses, before moving from today's narrow applications to more general-purpose BCIs.
Otherwise we might just find ourselves in the middle of a Philip K. Dick novel after all.