This is an archive article published on June 13, 2004

My mother, the computer


Not long ago, a British poll found that three quarters of people have hit their computers in frustration. A German carmaker recalled an automobile with a computerised female voice issuing navigation information, because many men refused to take directions from "a woman". A study found that people try to be nice to their own computers: they are more likely to report problems with the machine when asked about it while working on a different computer.

Psychologists, marketers and computer scientists are coming to realise that people respond to technology in intensely emotional ways. At a conscious level, people know their computers and cars are inanimate, but some part of the human brain seems to respond to machines as if they were human. "The way people interact with technology is the way they interact with each other," said Rosalind Picard, director of Affective Computing Research at the MIT Media Lab, during a recent lecture in Washington organised by the American Association for the Advancement of Science.

The tech world is slowly catching up to this insight as well. From automated voice systems that greet callers by saying, "Hi, this is Amtrak. I'm Julie!" to sophisticated programmes that can register human emotions, applications of "affective computing" are growing rapidly. Marketers see a gold mine in this research, which holds the promise of increasing sales in the same way that cheerful and helpful salespeople at a store are more likely to sell merchandise than surly clerks.

At the same time, the work raises ethical questions. They range from whether it is deceitful to encourage people to interact with technology as if it were human to deeper concerns about what it would mean if computers could really form emotional "relationships" with people.

Today, such concerns seem remote, because most technologies are almost deliberately anti-social: computers do not respond to emotional cues such as frustration, anger or anything else, and regularly act "inappropriately". What person, other than one of Arnold Schwarzenegger's movie characters, would ever say, "You have performed an illegal operation"?

In one familiar example, cited by Picard: You're on deadline. A character barges in when you are very busy. It offers useless advice and does not notice when you get annoyed. It is impervious to hints. You tell the character to go away and, in response, it winks and dances.

Picard flashed a slide of the ubiquitous Microsoft Office Assistant, the paperclip icon with the sly smile, an example of a programme oblivious to a computer user's emotions. Picard's research has shown that as annoyance with a computer grows, people grip the mouse more tightly and tense up in their chairs. Other studies have found that large numbers of people have kicked their computers or hurled abuse at them.


Scientists are responding in two ways to demands for "emotionally intelligent" computing. The first involves designing ways for a computer to read a person's emotions. Special sensors on seats can deduce from a person's posture whether she is interested or bored. Other sensors measure heart rate to tell when someone is stressed; a camera can determine whether a brow is furrowed.

Through complex computer processing, explains Karen Liu, a graduate student in Picard's lab, these signs are registered as signals of confusion or frustration. "In a way," Liu said, "we are giving machines eyes and ears." Other software can then respond appropriately.
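The article does not describe the lab's actual algorithms, but the general idea of turning sensor cues into a coarse emotional label can be sketched in a few lines of Python. Everything below is illustrative: the sensor names, weights and thresholds are assumptions made up for the example, not the MIT Media Lab's real pipeline.

```python
# Illustrative sketch only: hypothetical sensor readings and made-up
# thresholds, not the MIT Media Lab's actual processing.
from dataclasses import dataclass


@dataclass
class SensorReading:
    posture_slouch: float   # 0.0 (sitting upright) to 1.0 (fully slouched), from a seat sensor
    heart_rate_bpm: float   # from a heart-rate sensor
    brow_furrow: float      # 0.0 to 1.0, from a camera-based face tracker


def infer_state(reading: SensorReading, resting_hr: float = 70.0) -> str:
    """Combine simple cues into a coarse label that other software can react to."""
    stress = max(0.0, (reading.heart_rate_bpm - resting_hr) / resting_hr)
    frustration = 0.5 * reading.brow_furrow + 0.5 * stress
    boredom = reading.posture_slouch

    if frustration > 0.4:
        return "frustrated"   # e.g. offer help, or stop interrupting
    if boredom > 0.6:
        return "bored"        # e.g. vary the lesson in teaching software
    return "engaged"


if __name__ == "__main__":
    reading = SensorReading(posture_slouch=0.2, heart_rate_bpm=95, brow_furrow=0.7)
    print(infer_state(reading))   # prints "frustrated"
```

Real systems of this kind rely on far richer signal processing and trained models; the point of the sketch is only that raw measurements are reduced to a handful of states the software can act on.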

At the MIT Media Lab, which studies how electronic information overlaps with the everyday world, robots are being programmed to help people recognise when they are stressed and to remind them to relax and avoid repetitive-strain injuries. Similar techniques are being used to enhance teaching software by detecting when a student's interest is flagging.

The second, cruder approach involves encouraging people to believe that machines respond in social ways. The automated reservation systems used by Amtrak and many airlines fall into this category.

When done right, said Clifford Nass, a professor of communication at Stanford University and a pioneer in understanding the ways people relate to machines, users go along with the deception. Done wrong (when "Julie" cannot respond to a simple question, for instance), people get even more frustrated than they would be with a machine that makes no pretense at being human. "It turns out if 'Julie' speaks in that machine-like speech, people hate it when it says, 'I,'" Nass said. "They think it's clear you are not an 'I.' When it is recorded speech, people are more comfortable with the 'I,' up to the point it fails them."


The second approach also plays on people's vanity. People usually prefer a spellchecker programme that occasionally compliments them on getting a tough word right, Nass says. Matching a person's personality with advertising messages might radically increase sales, he adds. For instance, Amazon.com might sell more books if it found out whether a customer is an introvert or an extrovert by asking whether he prefers going to a party or reading a good book, and then tailored descriptions of products accordingly. Introverts tend to like factual messages; they distrust flowery language. Extroverts are the opposite, Nass said.
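As a thought experiment, Nass's idea of personality-matched copy could look something like the toy sketch below. The one-question classifier comes straight from the article; the function names, the sample blurbs and the product details are invented for illustration and are not anyone's actual system.

```python
# Illustrative sketch only: a toy version of personality-matched product copy.
# The question is from the article; everything else here is an assumption.

def classify_personality(prefers_party_over_book: bool) -> str:
    """One-question proxy: party-goers treated as extroverts, readers as introverts."""
    return "extrovert" if prefers_party_over_book else "introvert"


def tailor_description(title: str, personality: str) -> str:
    """Plain, factual copy for introverts; enthusiastic copy for extroverts."""
    if personality == "introvert":
        return f"{title}: 320 pages, hardcover, ships within 24 hours."  # invented details
    return f"{title}: a dazzling, can't-put-it-down read everyone will be talking about!"


if __name__ == "__main__":
    personality = classify_personality(prefers_party_over_book=False)
    print(tailor_description("My Mother, the Computer", personality))
```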

- LAT-WP

 
