After years working on artificial intelligence (AI) at IBM and elsewhere, new Tulane University assistant professor Nicholas Mattei has focused much of his effort during his AAAS Leshner Leadership Institute Public Engagement Fellowship on introducing his data science students to community-engaged scholarship. Undergraduate students at Tulane are required to take a service-learning course, yet there aren’t many options for computer science students to fulfill this requirement within their field. In his course, Mattei hoped to help his students learn what the local New Orleans community is interested in, and what data they might analyze in ways that could contribute to future community action. This builds toward the goal of helping future AI researchers engage with the public in mutually beneficial ways, so that scientists understand what the public wants from technology, and the public has a realistic understanding of technology’s limits and possibilities.
Mattei had intended to kick off this service-learning data science course with a community design project: in collaboration with Tulane’s well-established public service and human-centered design centers, he and his students would work with community members to determine what questions are of interest to them, and develop projects that fit within the students’ timeframes. Mattei notes that sometimes “there is a tension between the goals of a high impact project and pedagogical goals for the student,” which tend to be shorter-term in order to be completed within a semester.
Because of the COVID-19 pandemic, his students instead worked with the leaders of two local groups focused on using data and AI for community impact, Lamar Gardere from The Data Center and Ryan Harvey from Code for New Orleans, to develop their projects (Mattei still hopes to hold a community workshop with his next group of students in fall 2021). Although this approach felt less directly connected with the public than Mattei had hoped, “the students [still] loved it,” he said. One group of students investigated whether 911 calls changed during the coronavirus pandemic (they increased); another analyzed the number of restaurants that had permanently closed, which didn’t seem to be widely known at the time.
Students learned that in many cases, data are not as readily available as they might have assumed, and that this limits which AI techniques can be applied and what conclusions can be drawn. The leads of the two local organizations provided an important reality check on this aspect of their projects and helped them make their research questions more meaningful. Several students have expressed interest in continuing their work as part of the senior capstone projects that computer science majors are required to complete.
In addition to this course, Mattei is collaborating with two other computer scientists, Judy Goldsmith and Cory Siler; an ethicist, Sara-Jo Swiatek; and a religious studies scholar, Emanuelle Burton, to write a textbook that uses science fiction to discuss technology ethics, which the group intends to use as a springboard for broader dialogue and engagement in the future. The project arose out of the group’s frustration with discussions of AI ethics that place technology development outside the context of society, rather than embedded in it. “We all see technology not as this ‘thing’ that exists apart. AI is not this thing that exists apart,” Mattei says. “Society has needs and desires, and people build things to address those. And that changes what people want. It’s a push and pull.”
Mattei says that often, technology ethics education stops at telling students, “Don’t do bad things.” In reality, decisions are more complicated than that: they involve not one decision, but rather continual awareness of the small design decisions made along the way toward addressing a problem. Building on past examples that used fiction in similar ways, the textbook will employ science fiction stories to help learners temporarily separate themselves from moral and ethical questions, in order to consider the decision-making process itself. Mattei and his colleagues further describe the approach in an article in Communications of the ACM (Association for Computing Machinery). These moral and ethical dimensions of technology are the kinds of questions the AAAS Leshner Leadership Institute encourages fellows to discuss with diverse audiences, to help scientists gain other perspectives on how to collectively make decisions that affect society, as part of that “push and pull” Mattei describes.
During the fellowship program, fellows are also encouraged to think about ways to better institutionalize public engagement, for example by creating incentives or by building capacity through training. As the vice president of the ACM Special Interest Group on Artificial Intelligence, Mattei intends to set up a joint public engagement award with another major professional society in artificial intelligence, the Association for the Advancement of Artificial Intelligence (AAAI), where fellow Leshner Fellow Brian Scassellati is supporting the effort. “People have been really receptive to it,” Mattei says.
The AAAS Leshner Leadership Institute was founded in 2015 and operates through philanthropic gifts in honor of CEO Emeritus Alan I. Leshner. Each year the Institute provides public engagement training and support to 10-15 mid-career scientists from an area of research at the nexus of science and society.