AAAS Project to Promote Socially Responsible Artificial Intelligence in Healthcare Kicks Off with Public Opinion Work

Contact tracing apps use artificial intelligence to track the spread of disease.
Photo credit: John Cameron on Unsplash.

AAAS is launching a project to support the responsible development and deployment of artificial intelligence (AI) in healthcare contexts, and specifically in public health crises like the COVID-19 pandemic. Today the project released a landscape assessment of existing public opinion work in this area compiled by a team of researchers from the University of Wisconsin-Madison.[1] This report summarizes what we know about public views of the use of AI in healthcare and in areas affecting the pandemic response, with an emphasis on understanding the concerns of populations most vulnerable to the negative impacts of AI technology.

“By understanding the views of people who might use or be affected by new technologies, and seeking out diverse voices that include marginalized communities, we aim to deepen conversations about the implications of these technologies for civic society,” said Emily Therese Cloyd, director of the AAAS Center for Public Engagement with Science & Technology. “Such conversations can reveal questions and concerns about the ways technologies are designed and applied before they become problems -- or light the way to new approaches that hadn't been considered previously.”

From containment of the virus to drug development, AI-driven healthcare applications have played an important role in the COVID-19 response. AAAS identified three areas where these applications could have an outsize impact on privacy and human rights, or pose a greater risk of potential abuses: surveillance (e.g., contact tracing), medical triage, and allocation of resources. The landscape assessment released today reviews public opinion on the AI landscape more broadly while also seeking to understand the concerns of vulnerable communities in particular, including in the context of the three areas identified. 

The landscape assessment, intended as a baseline to show where additional research may be useful, found very little existing survey work on public opinions of AI in the United States. Surveys that do exist do not provide breakdowns by demographic, although the researchers did some of this analysis as part of their work. The landscape assessment recommends that future work “pay special attention to minority groups and groups that are traditionally digitally marginalized, as these groups are more susceptible to negative outcomes of AI usage and applications (for example, racial bias embodied in AI systems).”

Findings from the assessment also flagged as a gap that polls “tend to focus on specific, sometimes very hypothetical or obscure-seeming potential applications of AI, making it difficult to piece together an overall picture of public opinion regarding AI in general and in healthcare contexts in particular.”

AAAS intends to incorporate the results from this landscape assessment into a responsibility framework for creating and using AI in ways that alleviate, rather than exacerbate, any negative impacts of the technology among vulnerable and marginalized populations. AAAS will develop and disseminate this framework in collaboration with representatives of these groups as well as with AI experts and developers, social scientists, and policymakers.  

To follow up on the landscape assessment, AAAS is working with researchers from Marquette University and the Medical College of Wisconsin[2] to convene focus groups and gather more qualitative information from specific populations. The focus groups will be held with Black, Latino, Southeast Asian, and First Nations communities in the Milwaukee area.

“Our aim is to engage with these communities to gain a clearer picture of their general understandings of AI and its range of uses within the healthcare context, their reactions to the potential uses of AI technology in response to the COVID-19 pandemic, attitudes about the use of personal health data in the development of AI, and views about whether they feel AI might help (or, perhaps, perpetuate) health disparities in their respective communities,” said Michael Zimmer, lead investigator.

In the coming weeks, the project will also release a report summarizing the uses of AI in the COVID-19 response, focused on the three applications identified (surveillance, triage, and resource allocation). In later stages, AAAS intends to work with community leaders, AI experts and developers, social scientists, and policymakers to develop and deliver written and digital materials and direct engagement activities that promote conversations between scientific experts and diverse publics. Engagement activities may include online discussion forums, virtual workshops, town halls, webinars, and other appropriate convenings.

The Artificial Intelligence (AI2): Application/Implications Initiative was launched with support from Microsoft. The project is guided by an advisory group of multidisciplinary experts. Those interested in contributing to this work may do so at: https://www.aaas.org/ai2/donate.

[1] University of Wisconsin-Madison research team: Todd P. Newman, Emily L. Howell, Luye Bao, Becca Beets, and Shiyu Yang.

[2] Michael Zimmer and Praveen Madiraju (Marquette University) and Zeno Franco (Medical College of Wisconsin)