Responsible AI

A Facebook Live Series. Sponsored by Hitachi.

Artificial intelligence technologies are rapidly advancing and becoming increasingly pervasive in many facets of everyday life. Technologies such as facial recognition, machine learning, and natural language processing are all growing in their applications for private and public use, and as their utility increases, so do questions about their social implications.

The AAAS Responsible AI Series, with support from Hitachi, aims to explore artificial intelligence technologies, their current capabilities, their ethical and policy implications, and the responsibilities of the scientists and engineers developing them. Join us as we interview leading AI experts to debunk the myths and set the record straight about where the development of these technologies stands and where it is going.

You do not need a Facebook account to view this series.

Past Episodes

Episode 3: Opening the Medical "Black-Box"

January 23, 2020

Artificial intelligence (AI) promises to revolutionize the future of health care. New technologies have the potential to assist in the detection of strokes and the prediction of seizures, the identification of individuals who are at higher risk of domestic violence, and even the development of patient treatment plans. AI medical technologies may also help doctors provide greater access to care for remote individuals and communities. Parallel to this utopian vision of AI-driven health care is a dystopian future where health data collected through wearable devices are sold to third parties and used against patients by insurers, employers, or even banks. As AI continues its integration into the offices and operating theatres of medical professionals, what are the ethical and human rights issues that arise?

Join us for a dialogue between two experts on AI and its impact on health care. After outlining the current capabilities of AI in health care and its potential medium- and long-term capabilities, we will explore the “good” and the “not-so-good” aspects of AI when applied to health care. This event will address the benefits that might arise from applying machine learning to health data, particularly in the context of more accurate diagnostics. We will also discuss the issues that arise from this “black-box” approach to medicine, what consumers and patients should know about how their data are being shared, and the steps physicians, hospitals, and companies should take to protect patients’ privacy. Further, we will discuss the future of legislation and regulation, and the legal responsibilities when AI-based systems are shown to be wrong. This discussion will be followed by a Q&A session open to audience members.

Participants

W. Nicholson Price is a law professor at the University of Michigan. He has written extensively about health law and regulation, and has also published papers on the impact of AI on the medical system, intellectual property, and regulation. Professor Price received a JD and a PhD in biological sciences from Columbia University and an AB in biological sciences from Harvard College.

Hsiu-Khuern Tang is a Principal Research Scientist at Hitachi America, Ltd., where he is applying machine learning to solve business problems in different industries. In healthcare, he has created analytics applications that help hospitals make better decisions using their data. His current interests include improving the explainability of clinical machine learning models.

Ilana Harrus (Moderator) is the Senior Program Associate for the fledgling AAAS AI/Applications-Implications initiative. An astrophysicist by training, she changed fields after more than 15 years working at NASA, prompted by her concerns about privacy in the era of AI and big data. She has a PhD in Physics from Columbia University and a Master’s in Information Systems from the University of Maryland, Baltimore County. She is also a PMI-certified Project Manager (PMP).

Additional Resources

  1. "Talking points" summary of the discussion

Articles from the Science Family of Journals

  1. Algorithms on regulatory lockdown in medicine (Babic et al., 2019)
  2. Artificial intelligence for global health (Hosny and Aerts, 2019)
  3. AI in resource-poor health care systems (Alderton, 2019)
  4. Medicine contends with how to use artificial intelligence (Couzin-Frankel, 2019)
  5. Adversarial attacks on medical machine learning (Finlayson et al., 2019)
  6. Regulation of predictive analytics in medicine (Parikh et al., 2019)
  7. Artificial intelligence could diagnose rare disorders using just a photo of a face (Schembri, 2019)
  8. Big data and black-box medical algorithms (Price, 2018)
  9. Health and societal implications of medical and technological advances (Dzau and Balatbat, 2018)
  10. Turning skin “check” into checkmate (Lev-Tov, 2017)

Episode 2: Intelligent Toys

October 8, 2019

Children are interacting with toys designed with artificial intelligence-based technologies, and they are doing so in increasingly nuanced ways. Intelligent toys and other smart robots for children can deliver educational content, inspire emotional bonds, and even help children with autism build social skills. However, these devices also raise ethical, legal, and human rights concerns.

Join us for an interview with leading experts on intelligent toys and other smart tools for children. They will explore the current capabilities of these devices and their potential medium- and long-term capabilities. Learn about the ethical, legal, and social implications of these technologies and consider how these concerns should inform developers, users, and regulators. This interview will be followed by a Q&A session with audience members.

Participants

Kerstin Dautenhahn is Professor and Canada 150 Research Chair in Intelligent Robotics at the University of Waterloo in Ontario, Canada, and director of the Social and Intelligent Robotics Research Laboratory. Her research focuses on human-robot interaction, social robotics, and assistive technology. Before going to the University of Waterloo, she led the development of the KASPAR robot, designed as a social companion for children with autism. She is an IEEE Fellow and the author of more than 300 peer-reviewed articles.

Alexa Koenig is Executive Director of the Human Rights Center at the University of California, Berkeley School of Law. She teaches classes on human rights and international criminal law with a particular focus on the impact of emerging technologies on human rights practice. In 2018-19, she supervised a team of students at the Human Rights Center who produced a memorandum on artificial intelligence and human rights for UNICEF.

Jessica Wyndham (Moderator) is the Director of the Scientific Responsibility, Human Rights and Law Program. She also serves as coordinator of the AAAS Science and Human Rights Coalition, a network of scientific, engineering, and health associations that recognize the role of science and technology in human rights. Her areas of expertise include the intersections of science, technology, human rights and ethics, the social responsibilities of scientists and engineers, and the role of professional scientific, engineering and health societies in the promotion and protection of human rights.

Additional Resources

  1. Executive Summary: Artificial Intelligence and Children's Rights (UNICEF Innovation and Human Rights Center, UC Berkeley, 2019)

Articles from the Science Family of Journals

  1. Improving social skills in children with ASD using a long-term, in-home social robot (Scassellati et al., 2018)
  2. Personalized machine learning for robot perception of affect and engagement in autism therapy (Rudovic et al., 2018)
  3. How artificial intelligence lets Barbie talk to children (DeMarco, 2015)
  4. Minds of their own (Service, 2014)

Episode 1: Facial Recognition and Scientific Responsibility

September 10, 2019

Facial recognition is one type of artificial intelligence that is becoming ever more pervasive in our society. It can make our lives easier by accomplishing tasks such as unlocking smartphones with just a glance and automatically tagging our friends and family in photos on social media. However, facial recognition raises many ethical, legal, and human rights concerns, from inaccuracies in the technology to its application as a means of general surveillance. Given this, what are the responsibilities of developers and users to ensure facial recognition is transparently, ethically, and justly developed and applied?

Join us for an interview with two leading experts on facial recognition technology who will explore the current capabilities of facial recognition, debunk the myths and explain the realities of its current degree of accuracy, and discuss the potential medium- and long-term capabilities of the technology. Learn about current efforts to address the ethical, legal, and social implications of the technology and consider how these concerns should inform developers and users of the technology.

Participants

Neema Singh Guliani is a senior legislative counsel with the American Civil Liberties Union (ACLU) Washington Legislative Office, focusing on surveillance, privacy, and national security issues. Prior to joining the ACLU, she worked in the Chief of Staff’s Office at the U.S. Department of Homeland Security, concentrating on national security and civil rights issues.

P. Jonathon Phillips is an Electronic Engineer at the National Institute of Standards and Technology (NIST) Information Technology Laboratory. One of the foremost experts on facial recognition, he has published more than 100 peer-reviewed papers on face recognition, computer vision, biometrics, psychology, forensics, statistics, and neuroscience. He is an IEEE Fellow and an International Association of Pattern Recognition (IAPR) Fellow.

Jessica Wyndham (Moderator) is the Director of the Scientific Responsibility, Human Rights and Law Program. She also serves as coordinator of the AAAS Science and Human Rights Coalition, a network of scientific, engineering, and health associations that recognize the role of science and technology in human rights. Her areas of expertise include the intersections of science, technology, human rights and ethics, the social responsibilities of scientists and engineers, and the role of professional scientific, engineering and health societies in the promotion and protection of human rights.

News Article

AAAS Event Examines Readiness and Impact of Facial Recognition Technology on Society (AAAS News)

This series is supported by Hitachi.