Not long ago, a British poll found that three quarters of people have hit their computers in frustration.
A German carmaker recalled an automobile with a computerized female voice issuing navigation information -- because many men refused to take directions from "a woman."
A study found that people try to be nice to their own computers: They are more willing to report problems with a machine when asked about it while working on a different computer, as if reluctant to criticize the machine "to its face."
Psychologists, marketers and computer scientists are coming to realize that people respond to technology in intensely emotional ways. At a conscious level, people know their computers and cars are inanimate, but some part of the human brain seems to respond to machines as if they were human.
"The way people interact with technology is the way they interact with each other," said Rosalind Picard, director of Affective Computing Research at the MIT Media Lab, during a recent lecture in Washington organized by the American Association for the Advancement of Science.
The tech world is slowly catching up to this insight as well. From automated voice systems that greet callers by saying, "Hi, this is Amtrak. I'm Julie!" to sophisticated programs that can register human emotions, applications of "affective computing" are growing rapidly.
Marketers see a gold mine in this research, which holds the promise of increasing sales in the same way that cheerful, helpful salespeople sell more merchandise than surly clerks do.
Ethical questions raised
At the same time, the work raises troubling ethical questions. They range from whether it is deceitful to encourage people to interact with technology as if it were human to deeper concerns about what it would mean if computers could really form emotional "relationships" with people.
Today, such concerns seem remote, because most technologies are almost deliberately antisocial -- computers do not respond to emotional cues such as frustration, anger or anything else -- and regularly act "inappropriately." (What person, other than one of Arnold Schwarzenegger's movie characters, would ever say, "You have performed an illegal operation"?)
In one familiar example cited by Picard: You're on deadline and very busy. A character barges in. It offers useless advice and does not notice when you get annoyed. It is impervious to hints. You explicitly tell the character to go away and, in response, it winks and dances a jig.
Picard flashed a slide of the ubiquitous Microsoft Office Assistant, the paperclip icon with the sly smile -- an example of a program oblivious to a computer user's emotions. Picard's research has shown that as annoyance with a computer grows, people grip the mouse more tightly and tense up in their chairs. Other studies have found that large numbers of people have kicked their computers or hurled abuse at them.
Responding to computers
Scientists are responding in two ways to demands for "emotionally intelligent" computing. The first involves designing ways for a computer to read a person's emotions. Special sensors on seats can deduce from a person's posture whether she is interested or bored. Other sensors measure heart rate to tell when someone is stressed; a camera can determine whether a brow is furrowed. Through complex computer processing, explained Karen Liu, a graduate student in Picard's lab, these readings are interpreted as signs of confusion or frustration.
"In a way," Liu said, "we are giving machines eyes and ears."
Other software can then respond appropriately. At the MIT Media Lab, which studies how electronic information overlaps with the everyday world, robots are being programmed to help people recognize when they are stressed and to remind them to relax and avoid repetitive-strain injuries.
Similar techniques are being used to enhance teaching software -- by detecting when a student's interest is flagging.
The second, cruder approach involves encouraging people to believe that machines respond in social ways. The automated reservation systems used by Amtrak and many airlines fall into this category. When done right, said Clifford Nass, a professor of communication at Stanford University and a pioneer in understanding the ways people relate to machines, users go along with the deception. Done wrong -- when "Julie" cannot respond to a simple question, for instance -- people get even more frustrated than they would be with a machine that makes no pretense at being human.
"It turns out if 'Julie' speaks in that machine-like speech, people hate it when it says, 'I,' " Nass said. "They think it's clear you are not an 'I.' When it is recorded speech, people are more comfortable with the 'I' -- up to the point it fails them."
The second approach also plays on people's vanity. People usually prefer a spellchecker program that occasionally compliments them on getting a tough word right, Nass said.
Preferences by personality
Matching a person's personality with advertising messages might radically increase sales, Nass said. For instance, Amazon.com might sell more books if it found out whether a customer is an introvert or an extrovert by asking whether he prefers going to a party or reading a good book -- and then tailoring descriptions of products accordingly. Introverts tend to like factual messages; they distrust flowery language. Extroverts are the opposite, Nass said.
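The logic Nass describes can be sketched in a few lines. The one-question quiz, the personality labels, and the sample copy below are hypothetical; they illustrate the tailoring idea, not any retailer's actual system.

```python
# Hypothetical sketch of personality-tailored product descriptions.

def infer_personality(prefers_party_over_book: bool) -> str:
    """Crude one-question proxy: extroverts pick the party, introverts the book."""
    return "extrovert" if prefers_party_over_book else "introvert"


def describe_product(title: str, personality: str) -> str:
    """Factual copy for introverts, enthusiastic copy for extroverts."""
    if personality == "introvert":
        return f"{title}: 320 pages, hardcover, with a detailed index."
    return f"{title}: a dazzling, unputdownable read everyone will be talking about!"


if __name__ == "__main__":
    personality = infer_personality(prefers_party_over_book=False)
    print(describe_product("Affective Computing", personality))
```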
The researcher said that software can help students learn better when a virtual "teacher" is accompanied by a virtual "student." That way, Nass said, the "teacher" can occasionally direct questions to the virtual student, and the real student does not feel picked on all the time. And the virtual student creates an illusion of a classroom setting, in which the real student can receive praise from both the "teacher" and a virtual peer.
Such techniques, Nass said, are really no different from the routine deceptions in human interactions. "We spend enormous amounts of time teaching children to deceive -- it's called being polite or social. The history of all advertising is about deceiving. In education, it's often important to deceive people -- sometimes you say, 'Boy you are really doing good,' not because you meant it but because you thought it would be helpful," Nass said.
"When I go into Nordstroms, I am treated fabulously. Do those people really like me?"