The scientists and engineers spearheading the creation of artificial beings and bionic people are responding to the magnetism of the technological imperative, the pull of a scientific problem as challenging as any imaginable.
Fascinating scientific puzzle though it is, the creation of artificial beings is also expected to meet important needs for society and individuals. Industrial robots are already widely used in factories and on assembly lines. Robots for hazardous duty, from dealing with terrorist threats to exploring hostile environments, including distant planets, are in place or on the drawing boards. Such duty could include military postings because there is a longstanding interest in self-guided battlefield mechanisms that reduce the exposure of human soldiers, and in artificially enhanced soldiers with increased combat effectiveness. (For this reason, the Department of Defense, largely through its research arm — the Defense Advanced Research Projects Agency — is the main U.S. funding source for research in artificial creatures.) Artificial creatures can also be used in less hostile environments — homes, classrooms, hospitals, and rest homes — serving as all-purpose household servants, helping to teach, and caring for the ill or elderly.
Among these possibilities, the connection between artificial creatures and human implants might be the most important because it promises enormous medical benefits. This connection might be the single greatest motivation to develop artificial beings. Yet regardless of their potential good uses, and apart from any issues of blasphemy, we have concerns about robots and androids. One fear is that the limitations we think to design out of our creations, from cosmetic deficiencies to the existential realities of illness and death, are essential human attributes, and that to abandon them is somehow to abandon our humanity. Something in us, it seems, fears perfection, and artificial beings threaten us with an unwelcome perfection, expressed as rigid unfeeling precision.
There is another menace first conveyed nearly 200 years ago in “Frankenstein,” and now more compelling than ever: the fear that technology will grow out of control and diminish humanity for all of us. That concern is hardly limited to artificial creatures. It appears in many arenas — the loss of privacy associated with new forms of surveillance and data manipulation; the depersonalization of human relationships; the incidence of human-made ecological disaster; the growing gap between the world’s technological “haves” and “have-nots.” It is especially and deeply unsettling, however, to contemplate the literal displacement of humanity by beings made in the human image, only better.
Although “Frankenstein” is the most famous story touching on many of these matters, it is not the only one. The depth of our reactions is shown in a whole imaginative narrative of artificial beings — a millennia-old fantasy or “virtual” history, in which these creatures are the focus of a panoply of emotions, hopes, and concerns. In one thread of the virtual history, humans develop strong feelings for inanimate or artificial beings, as in the Greek myth of Pygmalion, who yearns for his statue of a beautiful woman to come alive. That thread also appears in E.T.A. Hoffmann’s nineteenth-century story “The Sandman,” where a young man falls in love with a clockwork automaton, and in the classic 1982 science-fiction film “Blade Runner,” where a special agent dedicated to the destruction of androids falls in love with one of them. In another thread in the virtual history, artificial beings yearn to become human or to be accepted as human, for example the “monster” in “Frankenstein,” the puppet Pinocchio, Commander Data in “Star Trek,” and the little boy android in the 2001 film “A.I.: Artificial Intelligence.”
In yet other stories, robots display intelligence and ethical standards that make them trusted guides to a better future for humanity, as in Isaac Asimov’s book “I, Robot,” but in a contrary thread, other equally able robots and androids slaughter people, as in Karel Capek’s play “R.U.R.” and the recent “Terminator” films. And even if artificial beings do not wish to wipe us from the earth, their superiority might still destroy us by stifling human creativity and independence, as in Jack Williamson’s story “With Folded Hands.”
No current artificial creatures can carry out these scenarios, nor are there yet bionic humans or cyborgs who are the physical or mental superiors of natural people. The abilities of robots and androids are still limited. If they behave intelligently, they do so only in specialized areas, or at a childlike rather than an adult level; though they might be mobile, they cannot yet independently navigate any arbitrary room or street; they are not conscious and self-aware, and hence are not moral beings as we understand morality; they are not emotional, and although they might elicit affection or an appreciation of cuteness as a living pet does, they evoke no deeper feelings.
They cannot pass for human in either appearance or behavior, at least not at the behavioral level proposed by the British mathematician Alan Turing in 1950. In what is now universally known as the Turing test, he proposed a purely verbal criterion for defining a “thinking machine” as intelligent. Imagine, he said, that a human observer can communicate with either the machine or another human without seeing either (for instance, via keyboard and printer), and can ask either any question. If after a reasonable time the observer cannot identify which of the two is the computer, the machine should be considered intelligent.
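The protocol Turing described can be sketched in a few lines of code. The following is a minimal illustrative sketch, not any real system: the respondent functions, their canned replies, and the toy judge are all hypothetical stand-ins, invented here to show the shape of the game — unlabeled respondents, free questioning, and a final guess by the observer.

```python
import random

def human_respondent(question: str) -> str:
    # Hypothetical stand-in for the human participant (canned replies).
    replies = {
        "What is 2 + 2?": "Four, obviously.",
        "How do you feel today?": "A bit tired, honestly.",
    }
    return replies.get(question, "I'm not sure what you mean.")

def machine_respondent(question: str) -> str:
    # Hypothetical stand-in for the candidate "thinking machine".
    replies = {
        "What is 2 + 2?": "4",
        "How do you feel today?": "I do not experience feelings.",
    }
    return replies.get(question, "Please rephrase the question.")

def imitation_game(questions, judge):
    """Run one round of Turing's game: the judge sees two unlabeled
    respondents, A and B, poses each question to both, and guesses
    which label hides the machine. Returns True if the machine
    'passes' (i.e., the judge misidentifies it)."""
    respondents = [human_respondent, machine_respondent]
    random.shuffle(respondents)  # the judge must not know which is which
    transcripts = {label: [r(q) for q in questions]
                   for label, r in zip("AB", respondents)}
    guess = judge(questions, transcripts)  # judge names "A" or "B" as machine
    machine_label = "A" if respondents[0] is machine_respondent else "B"
    return guess != machine_label

def naive_judge(questions, transcripts):
    # A toy judge: flags as the machine whichever respondent's answers
    # sound less conversational (fewer human-sounding filler words).
    def humanness(answers):
        cues = ("honestly", "obviously", "feel")
        return sum(1 for a in answers if any(w in a.lower() for w in cues))
    return min(transcripts, key=lambda label: humanness(transcripts[label]))
```

With these particular canned replies the machine's terse answers give it away, so `imitation_game` reports that it fails the test — which mirrors Turing's point that passing requires sustaining convincingly human conversation, not merely answering correctly.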
Some researchers now think the Turing test is not a definitive measure of machine intelligence. Yet it still carries weight, and now, for the first time in history, the means might be at hand to make beings that pass that test and others. Advances in a host of areas — digital electronics and computational technology, artificial intelligence (AI), nanotechnology, molecular biology, and materials science, among others — enable the creation of beings that act and look human. At corporations and academic institutions around the world, in government installations and on industrial assembly lines, artificial versions of every quality that would make a synthetic being seem alive or be alive — intelligent self-direction, mobility, sensory capability, natural appearance and behavior, emotional capacity, perhaps even consciousness — are operational or under serious consideration.
Not everyone engaged in these efforts is a robotics engineer or computer scientist. Researchers in other fields are working to help ill and injured people: Some of the most exciting efforts are in biomedical research laboratories, in hospitals and clinical settings, where physicians and engineers are developing artificial parts, such as retinal implants for the blind, that might eventually enhance human physical and mental functions. The medical applications and the engineering technologies enhance each other, and as they grow together, the potential for therapeutic uses brings significant motivation and a clear moral purpose to the science of artificial beings.
There is, however, considerable debate about the possibility of achieving the centerpiece of a complete artificial being, artificial intelligence arising from a humanly constructed brain that functions like a natural human one. Could such a creation operate intelligently in the real world? Could it be truly self-directed? And could it be consciously aware of its own internal state, as we are?
These deep questions might never be entirely settled. We hardly know ourselves if we are creatures of free will, and consciousness remains a complex phenomenon, remarkably resistant to scientific definition and analysis. One attraction of the study of artificial creatures is the light it focuses on us: To create artificial minds and bodies, we must first better understand ourselves.
While consciousness in a robot is intriguing to discuss, many researchers believe it is not a prerequisite for an effective artificial being. In his “Behavior-Based Robotics,” roboticist Ronald Arkin of the Georgia Institute of Technology argues that “consciousness may be overrated,” and notes that “most roboticists are more than happy to leave these debates on consciousness to those with more philosophical leanings.” For many applications, it is enough that the being seems alive or seems human, and irrelevant whether it feels so. Even our early explorations of artificial beings show us that the goal of seeming alive and human might be less challenging than we might expect because — for reasons only partly apparent — we tend to eagerly embrace artificial beings. As in the common reaction to Kismet or the robotic dogs, it takes only a few cues for us to meet creatures halfway, filling in gaps in their apparent naturalness from the well of our own humanity. In a way, an artificial being exists most fully not in itself, but in the psychic space that lies between us and it.
And yet ... there is the dream and the breathtaking possibility that humanity can actually develop the technology to create qualitatively new kinds of beings. These might take the form of fully artificial, yet fully living, intelligent, and conscious creatures — perhaps humanlike, perhaps not. Or they might take the form of a race of “new humans”; that is, bionic or cyborgian people who have been enormously augmented and extended physically, mentally, and emotionally.
New humans could also arise from a different thread in modern technology. Purely biological methods such as cloning, genetic engineering, and stem-cell research offer another way to enhance human well-being and change our very nature. While astonishing progress has been made in these areas, we have yet to see definitive, broad-scale results. Moreover, a program for changing humans at the genetic level has ethical and religious implications that trouble many people, and the consequences of human-induced changes propagating in our gene pool trouble many scientists. The creation of fully or partly artificial beings has its own set of moral issues; these, however, might ultimately prove more acceptable to society than those arising from genetic manipulation.
At its furthest reach, and as a great hope for the technology of artificial beings, we might be able to create a companion race — self-aware and self-sufficient, perhaps like us in some ways but different in others, with its own view of the universe and new ways to think about it. Fascination with the notion of communicating with another race of beings has been a main incentive in the search for intelligent life elsewhere in the universe — a hope that engages many people, as witness the great interest in the 1996 announcement that traces of ancient life were found on Mars. But that announcement was mistaken, and although the search continues (for instance, with the 2004 landing of two NASA robotic planetary explorers on Mars), we have yet to find evidence of alien beings anywhere that our spacecraft and telescopes can reach. Perhaps we never will, so the creation right here on Earth of a race that complements humanity has special appeal.
No matter what emerges from controversies about robotic consciousness or the morality of making artificial beings, no matter what approach to artificial intelligence proves effective, one thing is clear: Without digital electronics and digital computation, we could not begin to consider artificial intelligence and artificial sensory apparatus, the physical control of synthetic bodies, and the construction of interfaces between living and nonliving systems. Although the history of artificial beings has presented many ways to create them, animate them, and give them intelligence, now we are truly entering an era of digital people.
This has been excerpted from Digital People by Sidney Perkowitz © 2004 by the National Academy of Sciences, published by the Joseph Henry Press, an imprint of the National Academies Press. All rights reserved. This excerpt appears on this Web site under agreement with the National Academies Press.