Artificial intelligence (AI) can be skillfully deployed to address public health concerns. In the ongoing COVID-19 pandemic, for instance, AI can improve screening and testing and accelerate vaccine and drug development. Yet any incorporation of AI solutions in the public health realm must be accompanied by stronger trust in those technologies among patients and community members, particularly those historically underserved by the healthcare system, according to a new report released today.
“Public Opinion Research on Artificial Intelligence in Public Health Responses: Results of Focus Groups with Four Communities” was prepared for AAAS by a team of researchers at Marquette University and the Medical College of Wisconsin as part of the Artificial Intelligence: Applications/Implications (AI)² Initiative. The initiative was launched by AAAS and its partners earlier this year to contribute to the responsible development and application of AI, with the goal of eliminating rather than exacerbating social inequalities.
The report summarizes findings from virtual focus groups with four diverse and historically underserved communities in southeastern Wisconsin – African American, Hispanic, Southeast Asian and Native American (First Nation) – about public understanding and trust of AI technology used to address the COVID-19 pandemic.
“The project engages with these communities to gain a clearer picture of their general understandings of AI and its range of uses within the healthcare context, their reactions to the potential uses of AI technology in response to the COVID-19 pandemic, attitudes about the use of personal health data in the development of AI, and views about whether AI might help (or, perhaps perpetuate) health disparities in their respective communities,” the report states.
Based on these findings, the report also offers recommendations for implementing AI in public health crises like COVID-19, particularly in communities of color.
First, it is important to build literacy about the implications of AI by focusing on community benefits, the report’s authors said. Second, they advised respecting local cultures and remembering the human. Finally, communities should be involved in every step of implementing AI solutions, “from design to deployment,” the report says.
Many focus group participants struggled to understand AI or were unclear about how it could be applied to address COVID-19.
Said one participant from the Southeast Asian/Native American focus group, “I was trying to put the pieces together about how technology could affect how we deliver information about COVID-19. I just never really thought about that there could be a connection.”
The authors recommended first focusing on broad understanding of the community benefits of AI, then delving further into explanations about how such technologies work. After all, they noted, participants who showed greater technological literacy were more likely to focus on the positive impacts that technology has brought to their community.
Other themes that emerged from the focus groups were concerns about how AI might affect participants’ communities – such as by exacerbating racial disparities, widening the digital divide or disregarding each community’s relationship to technology. To allay such concerns, the report’s authors recommended that any future deployment of AI in communities of color take into account the local culture and emphasize that the technology augments rather than replaces human interaction.
Focus group participants also recognized the importance of knowing who is behind AI solutions and understanding their motivations. To ensure that the goals of AI developers align with those of the communities they serve, the authors also recommended including community members in every step of the design, development and deployment process.
As one African American community member said, “I think figuring out how to proactively educate and communicate with the community is something that we have to keep in mind from the beginning and not on the tail end.”
The report is the third to be released this year as part of the (AI)² Initiative. A January report summarized the landscape of public opinion work about views of AI in public health contexts, and a May report highlighted several ways that AI has supported the pandemic response, with a particular focus on uses such as triage and resource allocation that “center around the ethical issues of an AI-based algorithm making decisions of vital human significance, and the possibility for bias or unequal treatment of people by the software.”
Upcoming work for the initiative will include convening stakeholders to create a responsibility framework for developing just and ethical AI-based medical applications.
Funding for the (AI)² Initiative was provided in part by Microsoft.