Robots with "feelings" augur a new era in relations between humans and machines
Judging by conventional standards, few would call Kismet handsome. It has two round blue eyes, each with an eyebrow, two floppy ears and two rubbery red lips, all powered by 21 small motors set in a skeletal head and neck. But those who spend time with Kismet find a creature who responds to their joy and sadness, their anger and humor, such an attentive listener that they often come away with an emotional bond.
Kismet is, however, a robot: a mechanical head that resides in a lab at the Massachusetts Institute of Technology. It "sees" with several cameras and "hears" sound delivered through a microphone. And yet its power to read emotions in people, and to express emotions in response, offers what may be a glimpse into the future of relations between humans and machines. While it is a future of amazing promise, scientists are beginning to wrestle with complex questions that inevitably emerge when machines appear to express even primitive human feelings.
Rosalind W. Picard, founder and director of the Affective Computing Research Group at the MIT Media Laboratory, and Paul Root Wolpe, a bioethics expert at the University of Pennsylvania, explored the cutting-edge technology and the sometimes disquieting questions at a free forum sponsored by AAAS on 20 May in Washington, D.C.
"I was first interested in building computers that could be smarter in that they would be able to process auditory and visual information at the same time," Picard said in an interview before the lecture. To do this, she started researching how the human brain is able to process multiple senses simultaneously. "I kept bumping into lower level brain structures that help weigh things and that have a lot to do with human emotions. In a similar fashion, it looked like computers could be 'fixed' if they could also incorporate emotions."
According to MIT's Affective Computing Research Group, perhaps the most fundamental next-generation application will be a human interface whereby a computer can recognize, and respond to, the emotional states of its user. A user who becomes frustrated or annoyed with a product would "send out signals" to the computer, at which point the application would respond ideally in ways that the user would see as intuitive. For example, a computer piano tutor might change its pace and presentation based on naturally expressed signals that the user is interested, bored, confused or frustrated.
"So instead of an interface emotion going only one way, like smiling paper clips that pop up on your screen, it needs to be responsive to our emotions," explains Picard, the author of the award-winning 1997 book "Affective Computing." "If we send it an emotional signal saying 'We don't like this,' it should respond and offer the user the option to turn itself off or apologize."
During the lecture, Picard offered the example of a hypothetical learning companion. The robot would recognize that the student was making mistakes but enjoying the complexity of the problem. Because the student was exploring knowledge and not displaying outward signs of frustration, the learning companion would decide to refrain from interrupting with corrections.
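The piano tutor and learning companion described above share a simple underlying idea: estimate the user's affective state, then decide whether and how to intervene. A minimal sketch of such a decision rule follows; this is purely illustrative and not MIT's implementation, and the affect signals, thresholds, and function name are all hypothetical assumptions.

```python
# Hypothetical sketch of an affect-adaptive tutor's decision rule.
# The signals are assumed to be estimates in [0, 1] produced elsewhere
# (e.g., from facial expression, posture, or vocal cues).

def tutor_action(frustration: float, engagement: float) -> str:
    """Choose a tutoring response from two estimated affect signals."""
    if frustration > 0.7:
        return "slow down and offer a hint"    # user is visibly struggling
    if engagement > 0.6:
        return "stay quiet"                    # exploring productively; don't interrupt
    if engagement < 0.3:
        return "change pace or presentation"   # user seems bored
    return "continue as normal"

# A student making mistakes but enjoying the challenge: the tutor holds back.
print(tutor_action(frustration=0.2, engagement=0.8))
```

The interesting design question, as the forum made clear, is not the rule itself but who gets to decide when the machine withholds a correction.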
Repeatedly, Picard cited the example of Kismet, a robot that's the project of Cynthia Breazeal, an assistant professor of Media Arts and Sciences and director of the Robotic Life Group at the MIT Media Lab. Kismet's motors and sensors are driven by 15 networked computers (9 of them for vision alone) and four different operating systems. Breazeal has written that the array allows Kismet to interact with people "using para-linguistic cues such as facial expressions, body posture, vocal prosody, and gaze direction." And even though Kismet can't talk, it is capable of a pleasant sort of pre-verbal communication that conveys a range of familiar emotions.
Clearly, though, something more subtle is going on: By the midpoint of the lecture, the panelists were referring to Kismet not as a robot, not as "it," but as a "creature," as "he," a seemingly sentient creature with an emotional IQ that's remarkable for a machine.
"Kismet is a simple skeleton with eyes, ears, nose and mouth," Picard said, "and yet, his appearance evokes a human response. Most people would feel very uncomfortable getting naked in front of Kismet."
In that reaction and others like it, there's evidence of a dramatic possible change in the relationships between humans and machines. "If my 5-year-old loses too many games of tic-tac-toe in a row, and I sense her frustration, I might decide to let her win one," said Connie Bertka, director of the AAAS Dialogue on Science, Ethics and Religion. "Maybe my computer will be able to do the same for me. But do I want it to be able to make that choice?" The Dialogue on Science, Ethics and Religion seeks to help the public generally, and the religious communities particularly, understand advancements in science and technology, such as affective computing, and to use this understanding as the foundation for exploring the ethical and religious implications of these advancements.
Already, said Wolpe, we've seen the relative success of cochlear implants, advances in virtual reality, and the development of the first brain prosthetic (still in testing stages in mice), all of which may augur deeper changes in our interaction with machines.
"We're turning a corner in terms of trying to understand our relationship to technology," Wolpe said. "Biotech know-how and cyborg technology is going to reach such an altering moment, and the nature of humanity will be so altered, that we cannot begin to guess what 'human' will be like in 2150."
The technology may well have practical uses. "Brain imaging technology is now able to identify subjective states of human beings. It might evolve into a new 'lie detector,'" he said. "The technology is especially proficient in detecting guilty knowledge, with blood flowing to specific areas of the brain when people are hiding information."
But if machines can simulate emotion, if they can sense our pleasure and pain and respond in a familiar emotional language, then human nature suggests that closer attachments will be formed, he told the audience.
He used as an example a child's penchant for developing strong bonds with inanimate objects such as toys, dolls, even blankets. That capacity stays with us as we mature into adults, he said, and it's almost certain that a machine could generate such feelings of attachment as well.
"Any future technology that can evoke a relationship will work," Wolpe said. "We invest in things that induce an emotional response from it, and we'll make that investment even though the object doesn't exist like a human does."
But, asked one member of the audience, is simulation of emotion enough? Or do we need to know that the creature actually feels? And how will we ever know if and when robots have an inner, subjective life?
"Although the machine will be able to recognize emotion," said Picard, "you must remember that it won't have access to your innermost feelings."
Wolpe noted that the line between animal and machine has already begun to blur. For example, he cited recent experiments in which scientists placed probes in the brains of mice and thereby rendered their actions subject to computer control.
"There's still a controversial and sometimes nasty argument over whether or not animals have consciousness," he added. "When you think about consciousness in robots, keep in mind that we still can't define whether a chimpanzee has consciousness. We'll never be able to find out definitely in computers."
Edward W. Lempinen
28 May 2004