As artificial intelligence (AI) continues to evolve, its potential social impact and policy applications merit in-depth analysis. Building on its strength as a pioneer in exploring the social, ethical, and legal implications of emerging technologies, AAAS created the (AI)2 Initiative.
The initiative, funded in part by Microsoft, is an organization-wide collaboration among multiple AAAS Programs, including the Center for Scientific Evidence in Public Issues (EPI Center); the Scientific Responsibility, Human Rights and Law Program (SRHRL); the Center for Public Engagement with Science and Technology; the Dialogue on Science, Ethics and Religion (DoSER); and others.
“The wisdom of every one of our networks of thought leaders across AAAS should be engaged in the dialogues needed to leverage emerging technologies for the benefit of all,” says Theresa Harris, a Project Director in SRHRL. “We set out to build the structures needed to pursue this ambitious, cross-organization agenda. This meant an internal working group and an external committee of advisors. It also meant finding a starting point, which we decided would be health care.”
The COVID-19 pandemic reinforced the initiative’s decision, and in May 2021 it released “Artificial Intelligence and COVID-19: Applications and Impact Assessment.” In cataloguing the many and varied ways that AI has been deployed in the current public health crisis, the report demonstrated how abstract concepts have real-world applications, such as how the technology was used to forecast the disease’s spread and to support contact tracing. It also identified specific uses of AI that raise ethical and human rights concerns.
“It is not clear that all AI tools that have been deployed in the context of the pandemic are capable of addressing the public health needs for which they have been used,” explains Jessica Wyndham, the Director of SRHRL. “More attention must be given to assessing the data used by these tools to address concerns of bias. A question for further exploration also relates to how humans use the inputs provided by AI-based tools in their decision-making. We recognize that most of these points are not unique to the pandemic context. They are relevant to the development and deployment of AI, generally.”
This type of ambitious work to fully understand how AI will affect different aspects of life would not be possible without external support. “Microsoft is enthusiastic about the work of AAAS and its cross-cutting AI Applications/Implications effort,” says Dr. Eric Horvitz, Chief Scientific Officer at Microsoft and AAAS Fellow. “The timely study on uses of AI to address challenges with the pandemic focuses attention on how technologies designed to diagnose, predict, triage, and optimize can amplify existing social inequalities. As new technological tools emerge, it is imperative to have trusted organizations like AAAS examine and provide guidance on societal and ethical implications of the technologies and their applications.”
Building upon this body of research, the (AI)2 Initiative recently received a grant from the Ford Foundation to translate the conversation about ethical and human rights principles associated with AI into tangible action. With the additional funding, the initiative will develop approaches and models to help the technology community—in industry and academia—use the power of AI for social good. While the bulk of the initiative’s work will take place in the U.S., it will also involve industries that are global in their reach.
The goal is to keep universal values like equity, accountability and fairness at the core of AI-related projects, making sure voices from impacted communities are a part of all design, development and assessment processes. To accomplish this, the (AI)2 Initiative plans to develop and pilot a “Public Interest AI 101” course for developers, build leadership capacity for justice-based collaborations, partner with nongovernmental organizations to identify their most urgent needs for AI-based technologies, and foster the integration of the values of technology for the public good into the infrastructure of government, academia and the scientific community.
“From a global pandemic response hampered by the spread of misinformation promoted by harmful algorithms, to discriminatory practices that play out in policing and lending based on harmful predictive technologies, every day we see more clearly how technology intertwines with human rights issues,” notes Michelle Shevin, a Senior Program Manager with the Technology and Society team at the Ford Foundation.
“AAAS knows that technology is never neutral, and their critical efforts through the (AI)2 Initiative work to ensure the development and application of artificial intelligence technologies are undertaken in ways that alleviate, rather than exacerbate, social inequalities. This mission cuts to the heart of public interest technology, and the field's goal of advancing equity, expanding opportunity and protecting basic rights and liberties.”
Through its efforts, AAAS plans to build a community of scientists, engineers and medical professionals with the skills and knowledge necessary to advocate for the responsible development and application of AI. AAAS Members across disciplines who are interested in mobilizing to that end are encouraged to contact SRHRL to learn more.