The rapid expansion of facial recognition technology and its ethical and social implications were explored during a Facebook Live conversation hosted by the American Association for the Advancement of Science’s Scientific Responsibility, Human Rights and Law Program on Sept. 10, the first in a three-part series.
The half-hour-long conversation was as fast-paced as it was far-ranging in examining the technology’s current and long-term capabilities, its accuracy, and its ethical and social ramifications, including the responsibilities of scientists and engineers in developing the technology.
Jessica Wyndham, director of the AAAS Scientific Responsibility, Human Rights and Law Program, guided the discussion, which featured P. Jonathan Phillips, an electronic engineer and researcher at the National Institute of Standards and Technology’s Information Technology Laboratory, and Neema Singh Guliani, a senior legislative counsel at the American Civil Liberties Union who focuses on surveillance, privacy and national security issues.
The next session in the series, which is sponsored by Hitachi, Ltd., a Japanese manufacturer, will focus on intelligent toys on Oct. 8, and a third session will be held on Nov. 12. The first session drew live viewers from as far away as Norway, Germany, California, New York and Florida, among other locations. Speakers fielded 20 questions posed by the live audience on a range of subjects, including the threat posed by newly emerging software, known as deepfakes, that enables users to edit and alter video content. Following the live event, the audience grew to nearly 3,000 viewers.
Opening the discussion was an exploration of what facial recognition, a branch of artificial intelligence, can and cannot do, an inquiry meant to establish the technology’s capabilities and correct unfounded claims and public misconceptions. In responding, Phillips pointed to the results of a 2018 NIST evaluation examining the capabilities of facial recognition algorithms.
The research tested the ability of the software to match a photograph of one person to a different photograph of the same person within a databank of some 12 million facial images. The test revealed that in 99.7% of cases the correct image could accurately be singled out from the 12 million images in the collection. Phillips noted, however, that the match was drawn from near-perfect images that lacked the variations found in everyday crowds, where people tilt their heads, position them at angles or gather in places with varying light sources. “This is still current technology,” he said.
The long-promised capability of identifying a person from behind a windshield at great distances remains unrealized; Phillips said independent tests have not yet found it viable. “That’s still pie-in-the-sky,” he explained.
Guliani questioned the advancing pace of the use of facial recognition technology in the absence of more extensive public consideration. Moving the technology forward without first establishing how society can guarantee and protect civil rights and democratic freedoms is an upside-down approach, she said. “There’s a general acknowledgement that the risks and concerns posed by facial recognition are really unique in terms of the fundamental threat to civil liberties,” she added, noting that some use cases can chill First Amendment rights.
Also discussed was the accuracy of facial recognition technology at specific tasks. By way of example, Phillips described a scenario demonstrating how accuracy varies from application to application. Take, for instance, an imaginary bank with one of its customers seeking entrance. In such a case, two outcomes are possible: either the correct customer is allowed access, a result marked as a successful verification, or an imposter pretending to be a customer is given access, which is a “false accept.”
“The key point for accuracies is: what’s the tradeoff between the two types of errors?” said Phillips. Algorithm developers work to find the most effective way to design “the algorithm to work as best as possible, particularly at a fixed setting,” he added. “The goal is to prove technology can address that question.”
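The tradeoff Phillips describes, between turning away a legitimate customer and admitting an imposter, can be sketched in a few lines of Python. The similarity scores and threshold values below are invented for illustration and are not drawn from NIST’s evaluations; a real system would compute scores from face images with a trained model.

```python
# Illustrative sketch of the verification-threshold tradeoff (invented data):
# a system compares a similarity score against a threshold. Raising the
# threshold produces fewer false accepts but more false rejects, and
# lowering it does the reverse.

# Hypothetical similarity scores (0 to 1) for genuine customers and imposters.
genuine_scores = [0.92, 0.88, 0.75, 0.95, 0.81, 0.69, 0.90, 0.84]
imposter_scores = [0.30, 0.55, 0.62, 0.20, 0.48, 0.71, 0.35, 0.41]

def error_rates(threshold):
    """Return (false-reject rate, false-accept rate) at a given threshold."""
    # False reject: a genuine customer scores below the threshold.
    false_rejects = sum(s < threshold for s in genuine_scores)
    # False accept: an imposter scores at or above the threshold.
    false_accepts = sum(s >= threshold for s in imposter_scores)
    return (false_rejects / len(genuine_scores),
            false_accepts / len(imposter_scores))

for t in (0.5, 0.65, 0.8):
    frr, far = error_rates(t)
    print(f"threshold={t:.2f}  false-reject rate={frr:.3f}  "
          f"false-accept rate={far:.3f}")
```

Choosing the operating threshold, the “fixed setting” Phillips mentions, is exactly this balancing act: a bank might tolerate more false rejects to keep false accepts near zero.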
Guliani said research has shown facial recognition to be less accurate when evaluating the faces of women of color, and such technological shortcomings may disproportionately affect women, young people and “other categories for which the technology is not very accurate.”
The impact of these disparities is magnified by the environments in which the technology is being deployed, Guliani said, citing its use in criminal justice settings where arrests and incarceration practices already have been shown to foster racial and ethnic bias and discrimination.
“What you have is really the potential for this imperfect technology on its own to create problems, but then combined with the very imperfect context and in the hands of very imperfect actors, we’re using the technology for disparities to be further exacerbated,” she said.
Guliani said technology companies need to increase transparency about the risks their technology presents and determine whether the technologies should even be sold for certain purposes. Already, Massachusetts and California have banned the use of facial recognition technology in certain instances, she noted. “There’s definitely a responsibility and a role for companies and individuals developing the technology to ensure that it’s not used in ways that raise fundamental concerns.”
Guliani added, “While transparency around accuracy and the technology itself is important and crucial, one of the things we can’t ignore is we have to have a larger societal debate about whether we want this technology used at all in certain contexts. And what does that process have to look like in a democratic society?”
[Associated image: Alexander/Adobe Stock]