Bill Smart Brings Technical Expertise in Robotics and AI to Discussions about Regulation

Bill Smart (far right), professor of robotics, with graduate students Austin Whitesell and Alan Sanchez in 2019, demonstrating an experiment simulating how robots might be used during Ebola outbreaks.
Photo credit: Oregon State University

Bill Smart builds robots and researches human-robot interactions. He is also deeply interested in the policies that guide and regulate technology. Sometimes, he says, policymakers, like other members of the public, expect too much of technology and give it more responsibility, and presumed objectivity, than it deserves. These and related concerns are being raised in many discussions about the ethics of artificial intelligence (AI). This AAAS video series on responsible AI tackles some of these questions, as does this keynote lecture from Ruha Benjamin at the 2021 AAAS Annual Meeting and a recent book by Kate Crawford, which seeks to demystify the “myth of AI” (i.e., its presumed superhuman powers). Addressing these misperceptions and their implications is part of what motivates Smart, and many others in the 2020-21 cohort of AAAS Leshner Leadership Institute Public Engagement Fellows, to communicate about their work.

“The concern is that if you expect too much of it, it becomes a thing that will solve your problems without you thinking about it – it gives you permission to just follow orders,” says Smart, a professor of mechanical engineering and robotics at Oregon State University. Law enforcement, for example, might want to shift some of their decision-making about whom to arrest onto the technology, rather than using it as one tool in their toolbox for making good decisions. At its core, AI is just algorithms, he says, and those can be just as biased as the humans who create them or the data used to “train” them. Yet people may think, “well, the computer gave me this answer, so it must be right.”

As part of his fellowship, Smart wanted to have conversations with government agency representatives about the regulation of AI, and of robotics in particular. The COVID-19 pandemic slowed this effort, as did the work of his new role as an Amazon Scholar. However, he and his wife, researcher Cindy Grimm, wrote an article for the Brookings Institution, to be published soon, about some of these issues. Smart says it was a very different experience writing for this kind of platform and audience, including having someone else edit his work. He hopes the article might be the basis for organizing a conversation that includes policymakers at an upcoming Association for the Advancement of Artificial Intelligence meeting.

Smart is not new to engaging on these topics – for years he has participated in the We Robot conference, which brings together legal, policy, and technical experts in robotics. The conference accepts only a small number of papers each year, often co-written by legal and technical experts. Smart says it is an incredibly useful place to test ideas and material. He ran an interactive game at one recent meeting, and found it was a “really sticky” way to present the information – people referred back to the game throughout the rest of the conference.

Smart and Grimm received NSF funding to test the use of interactive demonstrations for engaging non-experts with AI. He notes that it is often difficult to measure whether such discussions have any effect on people’s understanding of AI, and which kinds of messages or activities work best – a challenge many people engaged in science communication may relate to. The grant will allow them to assess this at Georgetown Law School by splitting a technology law and policy class and holding interactive demos about collaborative robots with one group but not the other. At the end of the semester, they will compare the two groups’ understanding of the technology through written case studies – and again at the end of the following year.

In addition to supporting fellows’ own public engagement activities, the AAAS Leshner Leadership Institute helps them take steps toward institutional changes that build capacity for public engagement within the institutions and communities they are a part of. Smart intends to work on providing more opportunities for graduate students in his robotics department to fulfill their service requirement through public engagement. He also wants to secure funding for some of them to participate in science communication training offered by the Oregon Museum of Science & Industry, which he completed shortly before his AAAS fellowship.

Smart says the fellowship helped motivate him through the hard year of the pandemic -- monthly check-ins with a small group of other fellows and AAAS staff encouraged him to keep moving things forward, and helped him realize through his own report-outs that he was often doing more than he had acknowledged. The program also focused his thinking about the goals of his public engagement ideas and their likely impacts, leading him to consider where he wants to spend his energy. Referencing a recent discussion with Sean Gallagher, a government relations expert at AAAS, in preparation for upcoming virtual visits to Capitol Hill, he says AAAS has also given him access to “people who know where the levers are hidden… they know which room they are stored in.” This has helped him understand the policy process better -- such as focusing on reaching staffers for a relevant committee, who may have more topical expertise and interest than staffers in members’ personal offices, who tend to be generalists.

Smart also greatly appreciated the other fellows in his cohort, who come from very different backgrounds despite their shared focus on AI, and he hopes to stay in touch – as well as, eventually, to meet in person.

The AAAS Leshner Leadership Institute was founded in 2015 and operates through philanthropic gifts in honor of CEO Emeritus Alan I. Leshner. Each year the Institute provides public engagement training and support to 10-15 mid-career scientists working in an area of research at the nexus of science and society.