When Michael Littman turned 13 in 1979, he asked his parents for a computer instead of a bar mitzvah.
“There wasn’t any internet, but you could get a computer,” he says. “I grew up, at least for my early teenage years, thinking about things through the lens of a computer.”
It would be several years before he could send his first email. Now, forty years later, Littman is a professor of computer science and co-director of the Humanity-Centered Robotics Initiative at Brown University. His research areas include algorithm analysis, artificial intelligence, machine learning, reinforcement learning, and robotics. He is also a 2020-2021 fellow at the Leshner Leadership Institute, an initiative of the AAAS Center for Public Engagement with Science and Technology.
Littman’s experience follows the timeline of computing itself. What is harder to keep pace with, even for computer scientists like Littman, is how much and how quickly things are changing. In the podcast he co-hosts, called Computing Up, Littman has focused on the broader implications of computing, including multiple episodes addressing issues of fairness and ethics. This focus stems from a growing realization that the properties of certain algorithms used in artificial intelligence and machine learning can lead to decisions that are biased against certain communities.
“One reason that the data might indicate that a woman is less hirable than a man is that in the past the hiring decisions by people were biased,” he says, referring to a recently canceled project at Amazon UK that led to the development of a hiring algorithm that discriminated against women. “The algorithm didn’t introduce this idea; it was trying to copy the idea from the historical data, which is biased, so the algorithm dutifully copies that bias.”
Littman is quick to call out the human influences that shape an algorithm’s settings, what computer scientists call hyperparameters, in the process of fine-tuning a machine learning algorithm.
“People who are running these algorithms... if they are expecting certain things and they are not getting them, they will continue to tweak the algorithms until they get what they are looking for,” he says. “That tweaking process is how implicit bias is sneaking in there, I think.”
In the fight against these systemic and deep-seated biases, Littman points to concrete steps from a new sub-area of machine learning that can mitigate issues of unfairness, safety, and bias in algorithms, steps that anyone can advocate for.
“FATE is the sexiest acronym that exists: F is fairness, A is accountability, T is transparency, and E is ethics,” he says. “We want to assess an algorithm not just on accuracy, not just on computational efficiency, we want to assess an algorithm based on whether it is making fair decisions.”
As machine learning and artificial intelligence play a growing role in shaping societal issues, Littman continues to speak to the ways the ramifications of technology need to be front and center in our public discourse.
A multifaceted performer, Littman has over the years been producing music to reach new audiences, creating parody song videos on YouTube. “My first computer science song was created as an end-of-semester summary for an AI class I taught at Princeton,” he says. “I have tried to write one song per large lecture class ever since. My inspirations are Schoolhouse Rock, which I adored growing up, and Weird Al, who showed that it’s possible to be a genius lyricist without knowing how to write new songs.”
His craft goes beyond singing. With a flair for acting, Littman appeared in a TurboTax commercial in which he is first seen peering helpfully into a house and later walking through the door to offer his help to a woman working on her taxes. “Anna thinks you need a Ph.D. to do your own taxes,” comes the voiceover. “We brought in Dr. Michael Littman to explain the complexity behind her refund.”
Late last year, Littman was recognized by his peers for his public engagement efforts. He was selected, along with several other scientists, as a Public Engagement Fellow under the AAAS Leshner Leadership Institute to work on communicating AI to the public. Since then, a combination of events, including COVID-19 and a renewed focus on combating racism in STEM, has brought new challenges.
“We were selected late last Fall when the world was a different place,” he says. “The two biggest things that happened in 2020 also kind of landed on the fellowship... it just made it (the fellowship) bumpier, but it is really going well, I am learning a lot.”