Artificial Intelligence (AI) as a tool of public health has proliferated since the world first sought to understand, control, and fight the COVID-19 pandemic. From the first indications of the pandemic in December 2019 and the early predictions of its spread and impact – by a team of scientists who presented their results at the AAAS Annual Meeting in February 2020 – to the deployment of AI in the development of vaccines, AI has played a central role.
As described in an AAAS report released today, “the COVID-19 pandemic provided … a unique opportunity to prove that AI could be harnessed for the benefit of all humanity, and AI developers seized the moment.”
The proliferation of AI applications during the pandemic has occurred at such a speed that oversight has been challenging, if not impossible. The AAAS report was prepared, at least in part, on the premise that there is still immense value in taking stock of the many ways AI has been relied on during the pandemic, so that lessons can be learned about the applications’ impacts, from both a public health and a societal perspective.
The report highlights several ways AI has supported the pandemic response, such as repurposing old drugs by using algorithms to reduce the number of chemical compounds that need to be considered. This was done for a rheumatoid arthritis drug that became one of the treatments used to fight the disease. In another example from early in the pandemic, the White House launched the COVID-19 Open Research Dataset (CORD-19), led by the Allen Institute for AI. Powered by AI, the website allowed researchers worldwide to quickly share knowledge and information about all aspects of research linked to COVID-19.
Jessica Wyndham, director of the AAAS Scientific Responsibility, Human Rights and Law Program and a co-author of the report, says, “At the onset of COVID-19, there was a clear demand for using AI to fight the pandemic. However, no one was looking at the entire picture of how AI was in fact deployed and what ethical or human rights questions were arising from their implementation. We wanted to see the implications of these selected applications, paying particular attention to underserved populations. We wanted to see what worked, what didn’t and what we could learn from that for any future health crises.”
Concerns about AI in Medical Triage, Resource Allocation, and Surveillance
Because of the acute potential for harm and abuse, the report focuses particularly on AI as a tool for medical triage and the allocation of resources, and on surveillance applications such as contact tracing.
Concerns about uses of AI in both triage and resource allocation center on the ethical issues of an AI-based algorithm making decisions of vital human significance, and the possibility of bias or unequal treatment of people by the software.
An investigation of such issues conducted in one healthcare system and detailed in the AAAS report found no evidence of bias, but recommended studying a larger sample and highlighted the need to train algorithms used in medical triage on populations that reflect the ones for which they will be used. This topic is discussed in some depth in an episode of the AAAS Responsible AI series entitled “Medical triage during COVID-19.”
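To make the notion of “unequal treatment by the software” concrete, a common first-pass audit compares how often an algorithm prioritizes patients from different demographic groups. The sketch below is purely illustrative and is not the method used by the healthcare system the report describes; the function names, the disparate-impact ratio metric, and the example thresholds are assumptions chosen for clarity.

```python
def selection_rate(decisions, group_labels, group):
    """Fraction of patients in `group` that the algorithm prioritized.
    `decisions` is a list of 0/1 outcomes, `group_labels` the matching
    demographic label for each patient (both hypothetical inputs)."""
    in_group = [d for d, g in zip(decisions, group_labels) if g == group]
    return sum(in_group) / len(in_group)

def disparate_impact_ratio(decisions, group_labels, group_a, group_b):
    """Ratio of selection rates between two groups. A value near 1.0 is
    consistent with equal treatment; values far from 1.0 flag a possible
    disparity worth investigating (they do not by themselves prove bias)."""
    rate_a = selection_rate(decisions, group_labels, group_a)
    rate_b = selection_rate(decisions, group_labels, group_b)
    return rate_a / rate_b
```

An audit like the one in the report would run such a comparison over real triage decisions; as the report notes, a small sample limits what any single metric can show.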
While medical triage has particular implications for individuals, contact tracing is a population-wide tool created to manage and prevent transmission of the disease. In the U.S., contact tracing through smartphones has been mainly based on the use of Bluetooth technology to estimate proximity and risk of exposure between people. The report explores both the potential technical limitations of contact tracing and risks for privacy and confidentiality. Similar risks exist in the current context in which “green passports” are being introduced in some jurisdictions and debated in others.
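The Bluetooth approach works by converting received signal strength (RSSI) into a rough distance estimate, then flagging an exposure when two phones stay close for long enough. The sketch below illustrates the general idea only; the path-loss model, the calibration constant, and the “2 meters for 15 minutes” thresholds are illustrative assumptions, not the parameters of any deployed contact-tracing system, and real RSSI readings are noisy enough that this is one of the technical limitations the report raises.

```python
def estimate_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Estimate distance in meters from a BLE RSSI reading using a
    log-distance path-loss model. tx_power_dbm is the expected RSSI at
    1 meter; both defaults are illustrative, not calibrated values."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def flag_exposure(rssi_samples, interval_s=60, threshold_m=2.0, min_duration_s=900):
    """Flag a possible exposure when the cumulative time two devices spend
    within threshold_m meters reaches min_duration_s seconds (here, the
    commonly cited heuristic of 2 meters for 15 minutes)."""
    close_time = sum(interval_s for rssi in rssi_samples
                     if estimate_distance(rssi) <= threshold_m)
    return close_time >= min_duration_s
```

In practice, walls, pockets, and device orientation all distort RSSI, which is why proximity estimates of this kind are probabilistic rather than exact.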
The report concludes with a series of research questions that respond to the knowledge gaps revealed in the report. This research agenda addresses technical issues concerning the AI applications explored in the report (how well do they actually work? And how effectively do they meet the need for which they were deployed?) as well as data-related issues (how are the algorithms trained? What are the variables used to help decision-makers exercise their judgement, and what biases or inequity might be baked into those algorithms?).
In the coming weeks, the AAAS AI² Initiative will release a report sharing input from focus groups conducted with underserved communities in the greater Milwaukee area. The report will include their views about uses of AI technology during the pandemic and public health responses, especially related to triage, surveillance, and resource allocation; the collection of personal health data; issues of access to technology; and the potential for disparate applications and implications for their communities.
In the next several months, AAAS will convene a group of stakeholders who will draw upon this work to create a responsibility framework – a roadmap for developing and implementing just and ethical AI-based medical applications.
Funding for the AI² Initiative was provided in part by Microsoft.