The humanoid robots, some black and some white, gyrated and shook their limbs on a tabletop. When Ayanna Howard and her colleagues asked people to describe the emotions being expressed by the robots, people said, "'Oh, I think it's dancing, I think it's happy,'" Howard recalled.
But when Howard's team asked people to identify the robot's race as well, perceptions changed. People began describing the robots, which were going through the same motions as before, as "angry" or "scary."
The experiment showed how software and artificial intelligence developers can sometimes "prime" a person's interactions with technology, introducing or increasing biases, Howard said in her topical lecture at the AAAS Annual Meeting.
Developers have an obligation to think deeply about how they train artificial intelligences and how they may be introducing biases into the technology they create, especially since humans sometimes react to robots and AI very differently than we might expect, she said.
"We're taking our own unique biases — some of them are positive, but some of them are negative — and we are encoding them into our creatures," Howard said. "We are encoding them into our software and algorithms."
"I do not think that we can create a bias-free system," she added. "What I do believe, though, is that we can create a system that can help us identify our own biases and then fix itself."
Howard is the Linda J. and Mark C. Smith Professor and Chair of the School of Interactive Computing at the Georgia Institute of Technology. Her career focuses on how intelligent technologies function within human-centered interactions, including AI, robotics and assistive technologies.
Bias can affect these technologies in a variety of ways, Howard noted during her talk. In another study similar to the dancing-robots experiment, for instance, researchers looked at how people might react to male, female and gender-neutral robots assigned to occupations that are often gender-stereotyped. Would people react to and engage more positively with a female-gendered nurse robot, for example?
"And what we found was that there was actually no difference … it did not matter what gender people identified the robot as, whether it was male or female or neutral," Howard said. "They treated and they interacted with the robot exactly the same."
AI developers have assumed in some cases that gendering robots would enhance their interactions, but robots "don't have to inherit our human biases," such as whether a nurse is expected to be female, Howard suggested.
"The fact is that we interact with these robots as robots, period. And when we add in this aspect of gender, what we're actually doing is increasing the biases, augmenting the biases," she said.
Among her many projects, Howard develops AI-powered robots that work with children with disabilities to encourage exercise. Even though the robots are trained using data gleaned from human-human interactions — and the developers strive to make the experience as similar as possible to working with a human clinician — children and adults both say they prefer working with the robots.
Research shows that people have a cognitive bias of "machine knows best" when working with robots or other intelligent systems, Howard said. "They interact with these systems and they just go into, 'oh, the system knows what it's doing, I'm just going to defer and let the system lead me on.'"
In a Q&A session after the lecture with Jessica Wyndham, director of the AAAS Scientific Responsibility, Human Rights and Law Program, Howard said it may be necessary in the future for intelligent systems to come with a "package insert" like those accompanying medicines.
"I think we need to have an equivalent type of method for these algorithms, especially when they're deployed in scenarios that impact our civil liberties" such as face recognition software, she said. "Right now we have no idea how these systems are trained and what data they've been trained on."
Howard also noted the need for a more diverse developer corps and a broader range of human interactions and emotions to be included in intelligent systems. One system will not work for all people, she stressed. "Take the uniqueness of what makes us special, whether it's across gender, whether it's across age, whether it's across race, and design that specialness into the algorithm."
"We as people are diverse and we're different and it makes us unique and beautiful, and our AI systems should be designed in such a way," Howard said.