Anita Nikolich Wants the Government to Learn How to Hack AI Algorithms – to Make Them Safer

Photo credit: Neil Orman/AAAS

Anita Nikolich works in what she calls “the gray space” between academia, government, industry, and the non-traditional artificial intelligence “underground” (e.g., hackers). Nikolich is the director of research and technology innovation at the University of Illinois Urbana-Champaign’s School of Information Sciences, where she is bringing together AI-related work at the university under one umbrella. But a significant chunk of her time, and a main focus of her public engagement during her AAAS Leshner Leadership Institute Public Engagement Fellowship, goes to her role as co-lead of the DEF CON AI Village (hear more from Nikolich about her work and about AI and misinformation in this 6-minute video produced by AAAS).

DEF CON started almost 30 years ago as a hacker event and is now one of the world’s largest security conferences. The AI Village is one of its interest groups, “a community of hackers and data scientists working to educate the world on the use and abuse of artificial intelligence in security and privacy.” Through the AI Village, Nikolich engages with the public at DEF CON’s annual event in Las Vegas, which draws thousands of AI-interested people, from kids to Congressional representatives. In 2019, the group created a deepfake of Democratic National Committee Chairman Tom Perez, with his permission, which eventually led to a Congressional briefing on the topic. This past August the Policy Village, which she’s also involved with, coordinated with the U.S. Department of Homeland Security to bring another policymaker delegation through.

Over the course of her AAAS fellowship year, Nikolich arrived at the idea of using her connections and understanding across these different spaces to bring AI Village security experts together with biohackers (people who apply science and technology to try to improve or change how their bodies function) and FDA regulators, to work toward regulations that would make AI-based medical devices more secure. She explains that, as with the AI-based components of a Tesla car, companies are not required to prove these devices are safe and secure, and regulators aren’t always sure what questions to ask. “The fellowship gave me confidence to say, ‘here is what we want to get done,’” she says. “This is the audience, these are the people we need to get behind us… The focus on defining our messages has helped.”

Nikolich wants to help prevent hacking by bringing in people who know how to hack AI algorithms, rather than approaching the problem only theoretically. She also wants to advocate for AI to be auditable. “If a doctor uses an AI-assisted program to make a diagnosis, and you can’t tell which data went into training and creating the algorithm, you can’t tell if the output is safe for the patient,” says Nikolich. In addition, she led the writing of a policy brief on data security with several other Leshner Fellows for a virtual visit to Capitol Hill in June, coordinated by the AAAS Office of Government Relations.

The idea of attacking AI algorithms in order to figure out how to defend them made its way into a project Nikolich worked on through the AI Village journal club she co-leads (the sessions are streamed on Twitch, and anyone interested in AI can join – and many do). Researchers from Microsoft and IBM participate in the journal club, and those researchers published a framework for this approach to finding AI’s vulnerabilities.

Another side project Nikolich worked on over the past year was co-developing a COVID-19 misinformation tracker, Project Domino, which she and her colleagues piloted with the state of California. Their dashboard successfully showed hotspots where misinformation was being shared on social media. They hoped this would lead major platforms like Twitter to take some of these posts down, and would lead to meetings with community leaders who could persuade people sharing the misinformation to stop. While these outcomes didn’t come to fruition, Nikolich and her collaborators have been using the tool as an example of a crowdsourced community science project in talks they’ve been giving to encourage changes in licensing models for data processing tools and platforms.

Nikolich notes that ‘data science for good’ efforts tend to have trouble accessing the best tools. People often do this work in their spare time, so even if they have access to top-notch tools through their day jobs, they aren’t allowed to use those tools for the community science or public-participation research projects they take on outside of work. She and her colleagues have raised this issue with a variety of companies, building awareness of what their tools could accomplish if licensed for use in a wider range of contexts, across different projects, people, and countries.

Moving forward, Nikolich and another AI researcher are developing a disinformation game aimed at both kids and senior citizens, which they are piloting with the Buffalo, New York school system. Their goal is to help these groups recognize disinformation and know what to do about it. In her university role, Nikolich will continue helping to develop a strategic vision for AI research at the University of Illinois, including a serious effort to engage with people outside the university. As part of this, they will create a new community data clinic, as well as adult education and training programs.

“I learned so much from our small group,” says Nikolich, referencing the subset of Leshner Fellows she met with monthly over the course of the year. “I got really good, thoughtful feedback and it has helped me start to narrow my focus.”

The AAAS Leshner Leadership Institute was founded in 2015 and operates through philanthropic gifts in honor of CEO Emeritus Alan I. Leshner. Each year the Institute provides public engagement training and support to 10-15 mid-career scientists from an area of research at the nexus of science and society.