Artificial Intelligence Requires Thoughtful Policymaking, Experts Say

With appropriate policies in place, robots should become our “best friends,” not our “worst nightmare,” experts said at the 41st Annual AAAS Forum on Science & Technology Policy on 14 April.

During a panel titled “Best Friend or Worst Nightmare? Autonomy and AI in the Lab and in Society,” experts on artificial intelligence (AI) spoke about the role of policy in integrating new technologies into people’s lives. They praised current AI advancements while urging more policymaking in the arena of autonomous systems, particularly in applications such as disaster relief, sustainability, and the military.

The panel, co-organized by AAAS staff member Jonathan Drake and Miriam John, retired vice president of Sandia’s California Laboratory, urged a stronger focus on the promise of AI rather than its perils. The panelists dispelled myths about unrealistic Terminator-like AI robots and called instead for clear policies and regulations for impending technologies such as driverless vehicles.

“Some of these issues of ‘Robots, are they nightmares?’—this is just not a problem that we’re even able to concern ourselves with right now, with the kinds of technologies that I’m working with at the moment,” said Evan Drumwright, a professor of AI and robotics at George Washington University. “And the reason is that if you pose a problem to a robot as simple as going through a door, you will find that the robot will fail at that task much of the time.”

Eric Horvitz, technical fellow and managing director at Microsoft Research, agreed with Drumwright. He said that his overall bias is to see robots as more of a “best friend” to people, mainly because of advances in the automotive industry and the promise of driverless cars to revolutionize and improve the safety of transportation.

From left to right, panelists Evan Drumwright, James Shields, Robin Murphy, Eric Horvitz, and Miriam John at the AAAS Forum on Science & Technology Policy | Juan David Romero

“Cars today are public-health disasters we rarely think about until one of our family members is gone,” Horvitz said. “I lost my mother to a car accident. My wife’s sister and her whole family were killed in a car accident. So I see this is the way things are going to go, in many areas, especially where it concerns safety and high stakes—a beautiful meshing of human and machine intellect, largely guided by the machine, but also at times by an initiative taken by the human.”

Robin Murphy, director of the Center for Robot-Assisted Search and Rescue (CRASAR) at Texas A&M University, said the semiautonomous nature of AI is precisely why policymakers must strike a balance between implementing AI technologies too soon and too late, with the necessary policies and regulations in place.

For example, even though unmanned aerial vehicles (UAVs) exist, regulations concerning what fire departments can purchase are preventing their adoption for firefighting, Murphy reported. “At this point, none of the Federal Emergency Management Agency urban search and rescue teams use robots. They’re not allowed to buy them … the standards have not been approved,” Murphy added.

On the other hand, she thinks there is a “strong, scary tendency” in robotics, particularly for emergency management and disaster relief, to think that AI technologies are all fully mature and immediately ready for implementation, which is not always the case. The solution, Murphy said, is to make decisions about AI based on data, and that may mean more aggressively acquiring the data both before and during the use of the technology.

Robin Murphy, director of the Center for Robot-Assisted Search and Rescue (CRASAR) at Texas A&M University, discusses the most important policy questions to consider regarding artificial intelligence | Juan David Romero

At least in the Department of Defense, there is a clear policy for using unmanned systems or autonomous systems, particularly in lethal situations. That should give people comfort with the way these systems are being used, according to James Shields, retired CEO of The Charles Stark Draper Laboratory.

“Autonomous systems, or at least unmanned systems, have had a fairly significant impact on our recent conflicts, from providing persistent surveillance over the battlefield, to clearing caves, to finding improvised explosive devices, all of which have the ability to provide the capability to save lives and keep people out of harm’s way,” Shields said.

In fact, according to Horvitz, the United States was the first nation to issue an official policy statement on autonomous weapons systems, Directive 3000.09. Still, he recommends thinking about the long-term implications of any technology.

“As we build rich, smart, and intelligent systems, we’re dealing with systems in the open world that will always be grappling with uncertainty in their ability to see and to act under that uncertainty,” Horvitz said.

In the end, these technologies won’t be a substitute for a person, said Murphy. They’re new and different, which means they will give us great capabilities, but also some new responsibilities, she added.