
A Multiperspective Look at Artificial Intelligence for Human Rights Causes

Author:

John Dale, AAAS Science & Human Rights Coalition, Steering Committee


Notes:

  • Presenters
    • Moderator: Samir Goswami, Chief Operating Officer at the Partnership on AI.
    • Nadya Bliss, Executive Director of Global Security Initiative at Arizona State University.
      • With a PhD in Computer Science, Nadya also holds a professor of practice appointment, serves as a member of the graduate faculty in the School of Computing, Informatics, and Decision Systems Engineering, and holds a senior sustainability scientist appointment in the Julie Ann Wrigley Global Institute of Sustainability. She has worked for 20 years on developing research initiatives in security contexts, including national security contexts. Her current work also includes efforts to combat human trafficking and slavery.
    • Chloe Autio, Policy Analyst and Manager of Artificial Intelligence & Privacy Policy at Intel Corporation
      • Chloe develops and leads AI and privacy policy work from both a policy and a technology standpoint.
    • Jennifer Ding, Solutions Engineer at Numina
      • Jennifer uses machine learning techniques to extract insight from big and small datasets in order to improve quality of life in cities, streets, and public space. She emphasizes that how technology is built matters, and is interested in “intelligence by design” (and without surveillance).
    • Shabnam Mojtahedi, Senior Program Manager at Benetech
      • Shabnam is also a lawyer focused on rule of law and human rights in the Middle East and North Africa region. At Benetech, she is leading efforts to apply artificial intelligence to help civil society organizations pursue justice and accountability in Syria and beyond. Before coming to Benetech, she worked with the Syria Justice and Accountability Centre (SJAC), an organization that promotes meaningful transitional justice for Syrian victims of war crimes and human rights violations. Her current work at Benetech (developing software solutions) focuses on working with partners to examine needs, test tools (e.g., AI in the Syrian context), and incorporate their feedback accordingly. She is working in particular to define how we measure the impact of this technology in the human rights space – not just how to extract information from conflict zones, but how to analyze and assess everything that has been collected and documented. The objective is to set standards for the responsible development of software for good.
  • Panel’s Theme
    • Opportunities and risks presented by operationalizing artificial intelligence (AI) – and particularly how to utilize AI for human rights causes.
  • The loss or exploitation of personal data privacy
    • The first risk presented by operationalizing AI was the loss or exploitation of personal data privacy. Samir began the discussion with a question for Jennifer, “How do you bake in [to your design] intelligence without surveillance?”
    • Jennifer explained, “Our sensor approach is to use the camera as a sensor (with a processing unit) that yields anonymous movement data – we don’t need the liability risk of collecting everything. In other words, ‘smart collection.’” (A minimal code sketch of this idea follows this subsection.)
    • The question was posed: can this be commercially developed? Are there market benefits?
    • Jennifer: “Yes – in fact it’s potentially a more scalable and affordable approach for us.”  She explained that certain kinds of policies (like San Francisco’s ban on facial recognition software) will benefit Numina and its partners.  The implication is that smart urban policy and technology designed to solve social and human rights problems can also make good business sense.
    • In November 2018, prompted by the rapid rise of new technologies like AI, Intel Corporation released model legislation designed to inform policymakers and spur discussion on personal data privacy. Samir asked Chloe about how Intel deals with privacy concerns.
    • Chloe: “Privacy is something we value at Intel. It is a fundamental human right, and the United States needs a comprehensive federal privacy law. And I’m excited that DC policymakers are finally interested in this.”
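
A minimal, hypothetical sketch of the “smart collection” idea Jennifer describes above. All names here are illustrative stand-ins, not Numina’s actual code: frames are analyzed on the device and immediately discarded, so only anonymous movement data ever leaves the sensor.

```python
# "Intelligence without surveillance": analyze frames on-device, keep only
# anonymous movement data, and never store or transmit raw imagery.

from dataclasses import dataclass

@dataclass
class Detection:
    kind: str    # e.g., "pedestrian", "bicycle", "car" - no identities kept
    cell: tuple  # coarse grid cell, not a pixel-precise personal track

def detect_objects(frame):
    """Stand-in for an on-device vision model. A real sensor would run
    object detection here and return classes plus coarse locations only."""
    return [Detection(kind="pedestrian", cell=(3, 7))]  # dummy output

def process_frame(frame):
    """Turn one frame into anonymous movement records ("smart collection").
    The raw pixels are never stored or transmitted off the device."""
    records = [{"kind": d.kind, "cell": d.cell} for d in detect_objects(frame)]
    del frame  # discard the image as soon as the records are extracted
    return records

if __name__ == "__main__":
    print(process_frame(object()))  # -> [{'kind': 'pedestrian', 'cell': (3, 7)}]
```

Because no imagery persists, there is nothing identifying to leak or repurpose for surveillance – which is also why the approach reduces liability risk.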
  • Focusing on vulnerabilities, not just developing capabilities
    • Nadya expressed her appreciation for the AAAS policy at this conference of having participants opt in (rather than opt out) to being photographed.
    • She explained that the Global Security Initiative, which she directs, has been working with the United Nations University Centre for Policy Research (UNU-CPR), the Rights Lab, and others on the Delta 8.7 project.
    • The UNU-CPR created Delta 8.7—an innovative project that helps policy actors understand and use data responsibly to inform policies that contribute to achieving Target 8.7 of the United Nations Sustainable Development Goals (SDGs), a commitment to take effective measures to eradicate modern slavery, human trafficking, forced labor and child labor. Delta 8.7 brings together the most useful data, evidence, research and news, analyzes cutting-edge data, and helps people understand this data so that it can be translated into effective policy.
    • The Global Security Initiative also co-organized (with Delta 8.7, The Alan Turing Institute, the Computing Community Consortium, Tech Against Trafficking, and the Rights Lab) the Code 8.7 Conference in February 2019, which brought together for the first time the artificial intelligence, machine learning, computational science, and anti-slavery communities to discuss how these technologies could be used to help eradicate forced labor, modern slavery, human trafficking, and child labor in accordance with Target 8.7 of the Sustainable Development Goals.
    • Code 8.7 examined the value of machine learning to the anti-slavery community, how best to combine Big Data and Small Data, the potential of information and communications technology (ICT) for survivor self-identification, and the roles of satellite remote sensing, crowd-computing, and open digital maps in better visualizing slavery locations.
    • Conversations also emerged around the biases found in data, the need to understand modern slavery prevalence, how to use financial data to identify trafficking and the role of survivors as subjects and researchers.
    • Nadya reported that she and her partners are currently focusing on existing US databases and predictive decision-making environments, but suggested that this also comes with risks.
      • “We are finally focusing on vulnerabilities, and not just developing capabilities.  Tech has a significant diversity issue, and this allows us to address it in a different way.  We also have partnerships with law enforcement.” 
      • Nadya elaborated on this theme later in the discussion as well, claiming, “We have gotten into a state where we are capability-centric – not vulnerability-centric.”
    • She explained that the right kind of domain expertise is critical at all stages of this kind of development (even early-stage technology development).
      • “My hope,” she said, “is that we can do this with AI in ways that we missed with the development of the Internet...which we seem to have broken – [in] my opinion.”
  • Corporate partners in ethical and sustainable problem-solving
    • Shabnam explained that Benetech has been supporting human rights defenders around the world.
      • “More recently, to do so securely, we are asking how we can make connections within this data and increase collaboration with partners in new ways.  We are working on machine learning and computer vision – creating fingerprints for how closely related these are...Trying to overcome these technical challenges – AI in the human rights space – is tricky because of low tech resources and affordability issues.  You want to help partners develop solutions that are sustainable for them to use and appropriate for their workflows and risks.” (A simplified sketch of fingerprint-based similarity follows this subsection.)
    • Shabnam suggested that where Benetech can have the most impact while minimizing risks is “...by focusing on human rights investigators who are interested in facial recognition programs – deployed in an ethically responsible manner – and we want to help them do that.”
      • Nadya agreed with Shabnam, reiterating that “there is no reason for computer scientists and firms to parachute in and entangle themselves in ethically questionable projects, but we can use mature tech to help contribute to problem solving that is ethical and sustainable.”
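
Shabnam’s “fingerprints” refer to perceptual similarity measures. As a simplified, hypothetical illustration – average hashing is one standard technique for this, and is assumed here rather than taken from Benetech’s actual pipeline – a compact fingerprint can be computed per image or video keyframe and compared by Hamming distance:

```python
# Perceptual "fingerprinting" for clustering near-duplicate footage.
# Average hashing is assumed as the technique; inputs are 2D grayscale arrays.

def average_hash(pixels, size=8):
    """Downscale a 2D grayscale array (at least size x size) by block
    averaging, then threshold at the mean to get a bit fingerprint."""
    block_h = len(pixels) // size
    block_w = len(pixels[0]) // size
    blocks = [
        sum(pixels[y][x]
            for y in range(r * block_h, (r + 1) * block_h)
            for x in range(c * block_w, (c + 1) * block_w)) / (block_h * block_w)
        for r in range(size)
        for c in range(size)
    ]
    mean = sum(blocks) / len(blocks)
    return [1 if b > mean else 0 for b in blocks]

def hamming_distance(a, b):
    """Count differing bits; small distances suggest closely related
    content even when metadata is missing or the file was re-encoded."""
    return sum(x != y for x, y in zip(a, b))

if __name__ == "__main__":
    img = [[(x + y) % 256 for x in range(16)] for y in range(16)]
    near_dup = [[min(255, v + 2) for v in row] for row in img]
    print(hamming_distance(average_hash(img), average_hash(near_dup)))  # small
```

Clustering then amounts to grouping items whose fingerprint distance falls below a chosen threshold – useful for organizing large archives of conflict footage where metadata is missing.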
  • Diversity and inclusion
    • The discussion segued more broadly to issues of corporate social responsibility among tech firms. 
    • Chloe addressed diversity and inclusion, as well as transparency in data. Citing an annual CSR [corporate social responsibility] report that Intel publishes, she said that “our work on diversity and inclusion internally is something I am proud of. ...Diversity helps us build better products as well. Intel is working with ethnographers, anthropologists and social scientists and it has been productive and helpful in this regard.” 
  • Transparency of data and partnerships
    • Chloe also said that transparency about data and about what Intel can share with its employees (e.g., on pay-outs and comparative salary for women and minorities) is an important way that Intel is trying to shape the nexus between the development of artificial intelligence, business, and human rights.
    • Jennifer spoke to transparency in the context of Numina’s recent negotiations over its partnerships with cities and what kinds of information or data it will and will not collect. She acknowledged that this is an important decision and said that Numina is upfront and transparent about it.  “Numina thinks privacy for people at large is as important as it is for consumers in particular.”
    • In response to a question about whether Benetech engages in principled self-regulation, Shabnam replied, “I love to say no!”
      • “Because we are working in the human rights space and working with open source, we are asking about how to mitigate challenges to human rights all the time – especially around personally identifying information.”
  • Data responsibility and informed consent
    • Shabnam also identified data responsibility as an important concern – that is, informed consent when engaging with victims (something far harder to secure with digital data). Video of violence and atrocities that is then posted (without the consent of victims) is arguably being used with good intentions, but who really knows that it is being used in this way? Furthermore, no opt-out policy makes sense in this context.
    • Finally, Shabnam mentioned that partners’ time and resources (pro bono or not) are spent when organizations like Benetech promote certain solutions – so we should be respectful of how we direct and advise them.
  • Questions and Answers
    • Data ownership – how is it being determined?
      • Chloe responded, “There is no legal right to privacy in the United States. It’s a tough question, but we need to figure out a federal privacy law.”
      • Jennifer said, “Data ownership (and usage, as well) is very important.  The data that we collect should be available to the people we collect it from.”  She reported that Numina just created a sandbox that people can use. But, she warned, damage can be done even if there is co-ownership.  Also, ownership transfers are an issue that we need to consider.
      • Shabnam explained that Benetech is trying to cluster similar content (e.g., preserving and archiving 800,000 videos, many with missing metadata).
        • Manipulation of videos is a serious problem, but preserving as much of the data and metadata as possible makes it possible to work on verifying and authenticating what is and is not real.
      • Nadya explained that, when it comes to human trafficking research data, databases are not even being shared across cities in the Southwest.  So there is a lot of room to deploy useful solutions.
    • How do you get city governments interested in owning and using this data?
      • Jennifer: Transportation planners are the people we mostly work with at Numina. We ask them how we can automate what they want to do – like achieving zero traffic deaths. Also, we often raise privacy concerns with our city partners proactively. But we don’t work with law enforcement, which would raise different issues.
      • Shabnam: Digital literacy. She discussed how there has been increased recognition lately in the United States of how data is being used and why we should be concerned.
      • Nadya mentioned a just-released Oxford University report on organized disinformation campaigns (Samantha Bradshaw & Philip N. Howard, “The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation”), which describes the deliberate manipulation of information by states that is occurring.
    • Can AI eliminate bias in recruitment?
      • Chloe: “No! I’m going with low confidence that it can at this point... more research and caution is needed.”
      • Nadya: “No! AI is biased. Algorithms are biased because they are approximating things that are trained on data sets that have been created from often biased samples (e.g., white and Asian men).” But she also emphasized that algorithm research is underway, and that she remains hopeful.
    • Smart cities?
      • Jennifer: Beta Blocks is a good project. Nevertheless, we must always ask, “smart for whom?”
    • Supply chains and AI tech – human rights implications?
      • Nadya: NSF initiatives on illicit supply chain detection are underway.
      • Chloe: “I don’t work with Intel supply chains much, but it is a concern – e.g., no conflict minerals for chips. But I think it’s first and foremost a people and a policy issue, not an AI tech issue.” She suggested that putting AI tech first would be putting the cart before the horse.
      • Samir: Cybersource – tagging images that 500 companies use – are AI jobs in the supply chain good jobs?  Of course, the AI data itself could be used to help make such jobs better, but it could also be used to surveil organized labor on the job site.
    • Is diversity of scientific perspective and experience important in the development of new tech?
      • Chloe: Our business has changed – we make chips (hardware), but as AI has become a big part of this, working up the stack brings us into new contexts and forces us to consider deployment differently – areas where social scientists have been useful.
        • For example: Mobileye is one of our newer businesses. One of our social scientists was collecting qualitative data on what it means to have these kinds of technologies deployed – data you can’t collect in a lab or controlled environment. Intel Labs is studying tech reception in a live environment.
    • What are you most optimistic about?
      • Nadya: “That we are having this conversation. And Jennifer gives me hope by reporting that there is a market-based incentive to be ethical.”
      • Chloe: “I’m proud to see women leading this emerging space.”
      • Jennifer: “Human rights organizations won’t be the first people using AI – why not?  For example, using blockchain for identity in drone data collection. There is an appetite in tech to work on these kinds of problems. Maybe human rights can and should be leading the way in raising the issues.”
      • Shabnam: “I’m more cynical. I would like to see greater digital literacy and conversations like these across more of the country.”


Key Points/Takeaways:

  • Divergence of perspectives reflects some interesting tensions
    • We are told we need more expansive “digital literacy.” Yet, we are also told that organized state-sponsored disinformation campaigns are on the rise.
    • Just how knowledgeable do we really want our citizenry to be?
    • If human rights organizations can lead the way in raising issues concerning the development of AI, as Jennifer suggests, how might the Coalition, as a multi-disciplinary network of scientific associations, contribute to resolving this tension – or advancing other issues that emerged in this session?
  • Some issues mentioned include the following:
    • Using blockchain for identity in drone data collection.
    • Developing research to collect qualitative data on users’ (possibly including citizen scientists’) experience with AI tech in a live environment – including public space.
    • Collecting data (e.g., using agent-based modelling) that could supplement AI-based data collection on workers employed in AI jobs in the supply chains of AI tech firms.
    • NSF initiatives on illicit supply chain detection are underway, and this could be an opportunity for multi-disciplinary collaborative research proposals with a deliberate AI/human rights angle – and an opportunity for organizing future conference panels across scientific professional associations.
    • Working with municipal governments and universities to develop database sharing for the development and deployment of useful solutions.


Tags:

  • AI
  • Technology
  • United Nations
  • Drones
  • Policy
  • Data Privacy


Additional Resources: