
Mitigating AI Risks In State Government

State officials are leading the way to identify and tackle AI risks – from misinformation to data privacy concerns. In a December 2023 installment of an ongoing webinar series, the NGA Center for Best Practices in collaboration with the AAAS Center for Scientific Evidence in Public Issues brought together state leaders and national experts to examine major categories of AI risk in state government and highlight promising practices state governments are implementing to ensure AI technologies are deployed successfully and responsibly. 

The Reality of Current AI Harms

Alexandra Reeve Givens, CEO of the Center for Democracy and Technology, provided an overview of the types of AI risk state governments are confronting. Categories of risk include security and surveillance, education, consumer fraud and abuse, commercial data practices, benefits and public health, information harms, and elections.

Illustrating the real-world impact of AI hazards, Reeve Givens shared examples related to the automated administration of public benefits systems, such as unemployment insurance, SSI and Medicaid. While the use of automation in benefits systems isn’t new, its increasing use over the past 10-15 years has uncovered risks that significantly impact citizens’ rights and expose state government agencies to litigation. Reeve Givens also cited examples of AI risks in the criminal justice system, including wrongful arrests based on faulty facial recognition technology, as well as bias in AI systems used to make decisions regarding bail and probation.


Elements of Trustworthy AI

As state and federal policymakers grapple with AI implementation, several guideline frameworks that have emerged over the past year converge on common elements essential to ensuring trustworthy AI. State plans have drawn on recent frameworks such as the White House Blueprint for an AI Bill of Rights and the Artificial Intelligence Risk Management Framework from the U.S. Commerce Department’s National Institute of Standards and Technology (NIST).

One of the latest models is a Proposed Memorandum for Federal Agency Use of AI released in November 2023 by the Office of Management and Budget (OMB). Reeve Givens outlined key elements of OMB’s guidelines that can be helpful to states.

Mandate risk management practices: Before developing or deploying an AI tool, it is critical to determine whether the AI system impacts rights or safety. If it does, states should require minimum practices, such as completing an AI impact assessment, testing performance in real-world contexts, independently evaluating the AI, conducting ongoing monitoring and specifying a threshold for human review, and ensuring adequate training for operators. For AI systems determined to impact rights, additional minimum practices include testing for equity and nondiscrimination (pre- and post-deployment), consulting impacted groups, and notifying impacted individuals when AI meaningfully influences the outcome of decisions concerning them.

Require reporting & documentation: States can increase accountability and understanding around AI by directing agencies to inventory their uses of AI, designate which uses impact rights and safety, and issue templates for reporting outcomes in high-risk use cases so that both internal operators and the public will have access to and understanding of AI uses and impacts. 

Take specific steps on procurement: The procurement process is instrumental in shaping AI risk management, and OMB outlines several steps to help government agencies ask the right questions and use taxpayer dollars responsibly. An effective procurement process should include the ability to test the technology; retain sufficient government control and ownership over data; provide for quality control, privacy and security; and maintain adequate access and visibility so that due process requirements are met.


How States Are Managing AI Risk

Numerous states have established advisory committees and working groups to study AI and issue guidelines. As state efforts expand, these groups continue to identify risks and develop best practices to mitigate them while capitalizing on opportunities.

Washington state launched an AI Community of Practice, which includes both state and local government agencies, to facilitate collaboration, identify best practices, enhance accountability and oversight, and promote alignment of new AI technologies with business and IT strategies. The state is building on interim guidelines for the responsible use of generative AI, published in August 2023, which highlight several “dos and don’ts” for generative AI use: do review AI-generated audiovisual content for biases and inaccuracies before using it; do implement robust measures to protect resident data; don’t include sensitive or confidential information in prompts; don’t use generative AI chatbots or automated responses as a substitute for human interaction; and do provide mechanisms for residents to easily seek human assistance if the AI system cannot address their needs effectively.

In Virginia, Governor Glenn Youngkin issued an Executive Directive in September 2023 directing the state’s Office of Regulatory Management (ORM) to coordinate with the Virginia Information Technologies Agency (VITA) to develop standards and guidelines to ensure effective oversight of AI technology across four focus areas: legal protections, policy standards, IT safeguards, and K-12 and higher education implications. The effort generated findings in late 2023. ORM and VITA were also tasked with identifying pilot projects, both internal and public-facing, that can be implemented to test the standards and make government services more efficient and effective. Examples include the use of chatbots to more efficiently administer government services, as well as a potential pilot project to use AI to help the housing department analyze the 700,000+ building codes active in Virginia in order to identify overlapping requirements, with a view toward streamlining regulations.


National Governors Association