
Government + AI = Responsibility: AAAS’ EPI Center’s Epic Journey to Secure Tech’s Future

In a data-hungry society, artificial intelligence (AI) is a welcome solution to help sift through large datasets more quickly and efficiently than any human possibly could. Local and state governments that have adopted AI to support public services have seen the benefits of the technology – and yet, case studies also illustrate that AI can be profoundly harmful when it is founded on biased or faulty algorithms, underscoring the need for governments to proceed carefully with the technology.

To support governments in assessing their AI options and choosing approaches to assist in serving the public, AAAS’ Center for Scientific Evidence in Public Issues (EPI Center) is launching new initiatives and tools. These include Responsible AI Use in Local and State Government, a project that aims to connect policymakers and decision-makers with the right experts on the technical, social and ethical aspects of AI.

AI is an umbrella term for systems built with statistical methods that find patterns in historical data and apply them to new cases, essentially learning as they go.
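
As an illustrative sketch of that idea (not any system described in this article; the road-repair features and data below are invented), a simple model can be fit to labeled historical records and then used to score new, unseen cases:

```python
# Minimal sketch: "learn" a pattern from historical records, apply it to new cases.
# Hypothetical example using scikit-learn; the features and labels are invented.
from sklearn.linear_model import LogisticRegression

# Historical data: [road_age_years, cracks_per_km] -> 1 if repair was needed
historical_features = [[2, 1], [15, 9], [4, 2], [22, 14], [8, 5], [30, 20]]
historical_labels = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(historical_features, historical_labels)  # find the pattern in past cases

# New, unseen cases are scored with the learned pattern
new_cases = [[3, 1], [25, 16]]
print(model.predict(new_cases))  # e.g., [0 1]: second road flagged for repair
```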

Kate Stoll, a Project Director at the EPI Center, is helping to implement the project. She can cite numerous examples of AI programs adopted by governments that have benefited society, potentially saving lives or protecting infrastructure. These include AI models that analyze satellite or video data to detect wildfires sooner, or that identify roads in need of repair.

“But there are also a lot of what you would call ‘scary’ case studies, or examples of when things didn’t go right,” she cautions.

In the mid-2010s, about 40,000 people were falsely accused of fraud by the state of Michigan's unemployment insurance agency because of an error-prone automated software system built by a private company. More recently, it was found that the IRS was auditing Black taxpayers at three times the rate of other taxpayers due to an AI system.

Several measures can be taken to reduce these biases and errors. However, Stoll notes that local and state governments may have limited resources and technical expertise available to ask the right questions when considering adoption of a new AI system.
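
One such measure, offered here as a hypothetical illustration rather than a recommendation from the project, is a disparity audit: checking whether a system's error rates differ across demographic groups before it is deployed. A minimal sketch, assuming an evaluator has the system's predictions, the verified outcomes and a group attribute for a set of test cases:

```python
# Minimal disparity audit sketch: compare false-positive rates across groups.
# All data here is hypothetical; a real audit would use a held-out test set.
from collections import defaultdict

records = [  # (group, true_label, predicted_label)
    ("A", 0, 0), ("A", 0, 1), ("A", 1, 1), ("A", 0, 0),
    ("B", 0, 1), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),
]

negatives = defaultdict(int)        # count of true-negative cases per group
false_positives = defaultdict(int)  # cases wrongly flagged per group

for group, truth, prediction in records:
    if truth == 0:
        negatives[group] += 1
        if prediction == 1:
            false_positives[group] += 1

for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"group {group}: false-positive rate {rate:.0%}")
# A large gap between groups (here 33% vs. 67%) signals potential unfair bias.
```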

“In some cases, these AI technologies are so cutting edge that they aren’t mature enough for certain high-risk applications,” she says. “So, I think there’s an eagerness on both sides – experts and decision-makers – to be sharing and learning from each other, and that’s where EPI Center can foster those relationships.”

As part of the Responsible AI Use in Local and State Government project, the EPI Center has partnered with the National League of Cities and the National Governors Association to connect state and local leaders with AI experts to talk about the opportunities and risks of AI. The idea is to create an environment where they can learn from each other about responsible AI practices.

Stoll notes that AAAS – including through its EPI Center – is particularly well positioned to support these collaborations. Not only does the center specialize in creating bridges between policymakers and scientists, it also engages multidisciplinary experts.

“AI is such a general purpose technology that’s being deployed in practically every sector at this point, so we need more than just computer scientists to be thinking about these questions and talking with policymakers,” explains Stoll, emphasizing that AI applications also involve social and ethical considerations, thus requiring input from ethicists and experts in the social sciences too. “That is one of the reasons AAAS is well suited to be working in this space.”

Additionally, AAAS has produced several tools to facilitate better understanding and use of AI as part of its AI: Applications and Implications (AI2) Initiative. These tools include a report on Foundational Issues in AI and a glossary of AI terms for a non-technical audience, both developed by the Center for Scientific Responsibility and Justice. There is also a Responsible AI Decision Tree to help users think through the many questions raised by AI use and whether AI is the right tool for the problem at hand.

Stoll also encourages governments to use the National Institute of Standards and Technology’s recently published AI Risk Management Framework, which supports AI users in taking pre-emptive measures to manage AI risks.

The auditing and monitoring of automated systems once they are in the field is another important aspect of responsible AI use, says Stoll. “It’s good to go back and check once you’ve deployed the system to say, okay, is the system working as we expected? And if not, how do we need to tweak it? Or in the end, do we decide it’s not worth it and we’re not using this system?”
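
A post-deployment check along those lines might look like the following sketch, where the baseline accuracy, tolerance threshold and review data are all hypothetical assumptions rather than EPI Center guidance:

```python
# Minimal post-deployment monitoring sketch (hypothetical thresholds).
# Compares live accuracy on human-reviewed cases to the pre-launch baseline.

BASELINE_ACCURACY = 0.92   # accuracy measured before deployment (assumed)
MAX_ALLOWED_DROP = 0.05    # tolerated degradation before escalation (assumed)

def check_deployed_system(reviewed_cases):
    """reviewed_cases: list of (system_decision, human_verified_decision)."""
    correct = sum(1 for decision, truth in reviewed_cases if decision == truth)
    live_accuracy = correct / len(reviewed_cases)
    if live_accuracy < BASELINE_ACCURACY - MAX_ALLOWED_DROP:
        return f"ESCALATE: live accuracy {live_accuracy:.0%} is below baseline"
    return f"OK: live accuracy {live_accuracy:.0%}"

# Example: 17 of 20 recent decisions matched the human reviewer's call
sample = [(1, 1)] * 17 + [(1, 0)] * 3
print(check_deployed_system(sample))  # ESCALATE: live accuracy 85% ...
```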

Stoll says that another common recommendation from experts is to engage communities that will be impacted by AI use throughout the entire process of choosing, developing and deploying the AI system.

Moving forward, Stoll plans to continue exploring ways to help local and state governments ask the right questions of AI vendors, such as how their AI system was trained, how accurate it is, whether it applies to the target community and whether it risks unfair bias against protected groups of people.

“Truly reducing that information gap between the vendor and the local and state government decision-makers would be a huge and satisfying accomplishment,” Stoll says.

By Michelle Hampson