
Evaluation 101 for Human Rights Organizations: How do we know if we are making a difference?

The Evaluation 101 series has concluded.

Please join us for our new webinar series Innovations in Human Rights Program Evaluation.


This series of webinars was designed to provide foundational information on program evaluation for human rights organizations. Four webinars were held in 2017; click the links below for more information about each episode.

To be notified of future webinars, sign up for the Science and Human Rights listserv.

Webinar Archive

Evaluation 101: Insights into how change comes about and how to measure it
Click here to download the webinar slideshow, or here to download the presenter's notes.

Evaluation 101: Designing Evaluation Studies
Click here to download the webinar slideshow

Evaluation 101: Participatory Methods to Answer Different Evaluation Questions

Evaluation 101: Scientific Partnerships for Human Rights Program Evaluations


Further Resources

Expert Assistance

AAAS On-call Scientists
This program connects scientists, engineers, and health professionals interested in volunteering their skills and knowledge with human rights organizations that are in need of technical expertise. Over 1,000 volunteers are located around the world. 

Statistics Without Borders
Statistics Without Borders is an outreach group of the American Statistical Association. Over 2,000 members are available to provide pro bono assistance in planning and conducting evaluations. 

Resources from Webinars on Aspects of Program Evaluation

Case Studies of Evaluations

Abdul Latif Jameel Poverty Action Lab (JPAL)
The JPAL is a network of 145 professors from 49 universities working to reduce poverty through research, policy outreach, and training. Its website offers extensive information on performance and impact evaluations organized by sector and project, along with best-practice lessons and how-to guides on topics ranging from research design and randomization to measurement, data collection, and working with data.

"Embracing Evaluative Thinking For Better Outcomes: Four NGO Case Studies." InterAction, June 2014.
Compiled by InterAction, this paper presents four case studies of field evaluations by Catholic Relief Services Ethiopia, CARE Rwanda, Plan International Uganda, and Winrock International Kenya. The publication aims to provide guidance to organizations interested in evaluative thinking at the organizational, program, and project levels.

Schlangen, Rhonda. "Monitoring and Evaluation for Human Rights Organizations: Three Case Studies." Center for Evaluation Innovation, January 2014.
This paper explores three case studies of monitoring and evaluation efforts by three human rights non-governmental organizations (NGOs): Amnesty International, The International Commission of Jurists, and Crisis Action. The case studies are intended to support efforts within the human rights community to explore and tackle M&E challenges by providing concrete examples and transferable lessons about how to integrate M&E in useful ways. The cases emphasize both the methodologies used and the organizations’ efforts to build internal M&E capacity and support.

USAID. Technical Note: Evaluative Case Studies.
The authors argue that evaluative case studies are a valuable method in their own right and a useful complement to other methods of evaluating development projects and activities. This document provides practical guidance for managing development evaluations that use case studies, including guidelines for identifying when an evaluative case study is an appropriate method and when to consider other evaluation methods. The authors also discuss choosing the type of case study, sampling techniques, and judging the success and effectiveness of a case study for particular objectives.

Handbooks

Bamberger, M., Rugh, J. & Mabry, L. (2006). Real World Evaluation: Working Under Budget, Time, Data, and Political Constraints. Thousand Oaks, CA: Sage Publications, Inc.
Bamberger, Rugh, and Mabry acknowledge that budget, time, lack of quality data, and conflicting political perspectives are common real-world constraints on successful evaluations. The authors provide practical guidance on how to conduct evaluations while working under these constraints and challenges.

"Evaluating Human Rights Training Activities: A Handbook for Human Rights Educators." United Nations Office of the High Commissioner for Human Rights & Equitas, 2011.
This handbook synthesizes existing research and practice in educational evaluation in human rights settings. It provides detailed, step-by-step guidance, including examples of tools and techniques for evaluations that are adaptable to a variety of contexts, and discusses concerns that may arise in evaluations, including gender, culture, time, and resources.

"How To: A Local Ownership Approach to Evaluation in Practice." InterAction, May 2015.
This how-to guide accompanies InterAction's paper "Local Ownership in Evaluation" (listed under Overall Guidance below) and offers practical steps for moving from participant inclusion to local ownership of evaluation decision making in practice.

Khandker, S.R., Koolwal, G.B. & Samad, H.A. (2010). Handbook on Impact Evaluation: Quantitative Methods and Practices. Washington, DC: The International Bank for Reconstruction and Development / The World Bank.
This book reviews the main quantitative methods and models used in impact evaluations. The authors distinguish impact evaluations from other evaluations, such as M&E or operational evaluations, walk through the experimental design of an impact evaluation, and discuss the specific strengths and weaknesses of impact evaluations. They also examine various methods, including matching and propensity score matching, the double-difference method, instrumental variable methods, regression discontinuity, and pipeline methods. Finally, they briefly discuss structural approaches to program evaluation, including economic models that can lay the groundwork for estimating the direct and indirect effects of a program.
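As a simple illustration of one method the handbook covers, the sketch below computes a minimal double-difference (difference-in-differences) estimate. The toy data, variable names, and two-period setup are invented for this example and are not drawn from the handbook.

```python
# Minimal double-difference (difference-in-differences) sketch.
# All data and variable names are hypothetical; real impact evaluations
# need careful design, covariates, and robust or clustered standard errors.
import pandas as pd
import statsmodels.formula.api as smf

# Toy panel: outcomes observed before and after for treated and comparison groups.
df = pd.DataFrame({
    "outcome": [10, 11, 12, 16, 9, 10, 11, 12],
    "treated": [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = program group
    "post":    [0, 0, 1, 1, 0, 0, 1, 1],   # 1 = after the intervention
})

# The coefficient on treated:post is the double-difference impact estimate.
model = smf.ols("outcome ~ treated + post + treated:post", data=df).fit()
print(model.params["treated:post"])
```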

Overall Guidance

International Initiative for Impact Evaluation (3ie)
3ie is an international grant-making NGO promoting evidence-based development policies and programs. Its website provides more than 600 impact evaluations, policy briefs, systematic reviews, impact evaluation news, and events, as well as resources on using evidence (organized by field), obtaining funding, and methods for improving evaluations.

"Local Ownership in Evaluation: Moving from Participant Inclusion to Ownership in Evaulation Decision Making." InterAction, February 2015.
The authors argue that current shifts in international assistance necessitate changes in development evaluation. The paper provides a rationale for a local ownership approach to evaluation and describes ways to promote local ownership in practice. It is accompanied by a how-to guide with practical steps for applying local ownership in evaluation.

World Bank Evaluation Capacity Development

  • Conducting Quality Impact Evaluations Under Budget, Time and Data Constraints
    This paper discusses how cost and time constraints can affect the validity of an evaluation design and the conclusions derived from the evaluation. The authors explore which compromises are appropriate while still allowing for valid findings, including the minimum methodological requirements for a quality impact evaluation. The document also lists a range of options for simplifying the design of a study to meet cost and time constraints, including selecting comparison groups, using secondary data, strategies for reconstructing baseline data, reducing the cost of data collection, and addressing other budget, time, and data constraints.

On-call Scientists

  • This AAAS service partners volunteer scientists and engineers with human rights organizations. It assists with survey design, cost-benefit analysis, data management, and more, and can help human rights organizations determine information-gathering goals, identify ways to gather and present quality information, and critique reports. It is a service of the AAAS Scientific Responsibility, Human Rights and Law Program. More information at http://oncallscientists.aaas.org

Measures and Methods

BetterEvaluation
BetterEvaluation provides resources on managing evaluations in an effort to improve evaluation practice and theory. Its website covers a variety of evaluation approaches, from case studies and participatory evaluation to randomized controlled trials and outcome mapping. It also provides short summaries and relevant resources for each step of the evaluation process, from engaging and understanding relevant stakeholders and deciding who will conduct the evaluation to best practices for reporting and using findings.

Theories of Change

Coffman, Julia and Tanya Beer. "The Advocacy Strategy Framework: A tool for articulating an advocacy theory of change." Center for Evaluation Innovation, March 2015.
Coffman and Beer provide a simple, one-page tool for thinking about the theories of change that underlie public policy advocacy strategies. The article first presents the tool and then offers six questions that advocates and funders can work through to better articulate their theories of change. While recognizing that advocacy is neither predictable nor linear, the tool gives advocates a useful starting point for using theories of change in advocacy work, an opportunity to think about their specific audiences, and prompts for identifying useful tactics and meaningful outcomes.

"The Community Builder's Approach to Theory of Change: A Practical Guide to Theory Development." Aspen Institute, February 2009.
This publication explains how to develop a community-building theory of change, which draws the links between early and intermediate steps and long-range results.

Stachowiak, Sarah. "Pathways for Change: 10 Theories to Inform Advocacy and Policy Change Efforts." Center for Evaluation Innovation & ORS Impact, October 2013.
Stachowiak provides detailed information on the use of theories of change in policy. This paper focuses specifically on those theories most directly applicable to understanding how policy change happens or how specific advocacy tactics play out. Stachowiak also includes a discussion of how evaluators, advocates, and funders can apply these theories to advocacy and policy work.

Study Designs

Design, Monitoring and Evaluation for Peacebuilding
DM&E for Peace is a global community of practitioners, evaluators, and academics that share best and emerging practices on how to design, monitor, and evaluate peacebuilding programs. They provide a variety of how-to resources (arranged by project theme) and blog style posts on related design, monitoring, and evaluation topics, and discussions of key issues from the field.

Morra Imas, L.G. & Rist, R.C. (2009). The Road to Results: Designing and Conducting Effective Development Evaluations. Washington DC: The International Bank for Reconstruction and Development / The World Bank.
Morra Imas and Rist explain how an advancing development agenda necessitates corresponding shifts in development evaluation. Specifically, the authors argue that evaluation should move from output-focused to results-based models, particularly in the context of the Millennium Development Goals, and that successful evaluations often must assess results at the country, sector, theme, policy, and even global level. The book explores the ways in which development evaluations require a coordinated approach that emphasizes partnerships and joint evaluations, analyzes the complexities and challenges of joint evaluations, and provides guidance on building development evaluation capacity and conducting evaluations that focus on results.

Shadish, William and Thomas Cook. "The Renaissance of Field Experimentation in Evaluating Interventions." Annual Review of Psychology, July 2008.
This article reviews the history and use of field experiments, particularly four kinds of experimental and quasi-experimental designs: randomized experiments, regression discontinuity designs, short interrupted time series, and nonrandomized experiments. The authors address difficulties that arise under field conditions, including estimating treatment effects under partial treatment implementation, preventing and analyzing attrition, analyzing nested designs, new analytic developments for regression discontinuity designs and short interrupted time series, and propensity score analysis.

Shadish, William, Thomas Cook and Donald Campbell. "Experimental and Quasi-Experimental Designs for Generalized Causal Inference." Wadsworth Cengage Learning, 2002.
This book covers four major topics in field experimentation. First, the authors discuss key theoretical concepts of experimentation, causation, and validity. They then address quasi-experimental designs, including regression discontinuity designs, interrupted time series designs, and designs that use pretests and control groups, among others. The authors also explore issues that arise with randomized experimental designs, including flaws in logic and design, the ethics of randomized controlled trials, and common concerns in recruitment, assignment, treatment, implementation, and attrition. They conclude with a discussion of a grounded theory of causal inference and methods for implementing that theory in a variety of studies.
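For readers new to the regression discontinuity designs discussed in these two works, the sketch below simulates a sharp-cutoff design and recovers the treatment effect from local linear fits on either side of the cutoff. The data, cutoff, bandwidth, and effect size are all invented for illustration; a real analysis would use principled bandwidth selection and robust inference.

```python
# Sharp regression discontinuity sketch with simulated data.
# The running variable, cutoff, bandwidth, and effect size are invented.
import numpy as np

rng = np.random.default_rng(0)
n, cutoff, true_effect = 2000, 0.0, 2.0

running = rng.uniform(-1, 1, n)              # assignment (running) variable
treated = (running >= cutoff).astype(float)  # sharp rule: treated at or above cutoff
outcome = 1.5 * running + true_effect * treated + rng.normal(0, 1, n)

# Local linear fits within a bandwidth on each side of the cutoff.
bw = 0.25
left = (running < cutoff) & (running > cutoff - bw)
right = (running >= cutoff) & (running < cutoff + bw)
left_fit = np.polyfit(running[left], outcome[left], 1)
right_fit = np.polyfit(running[right], outcome[right], 1)

# The jump between the fitted lines at the cutoff estimates the treatment effect.
effect = np.polyval(right_fit, cutoff) - np.polyval(left_fit, cutoff)
print(round(effect, 2))
```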

Organizations on Evaluation

International Development Evaluation Association (IDEAS)
IDEAS is a global professional evaluation association that promotes evaluation, fosters capacity development, and advances evaluation theories, methods, and the use of evidence.

International Organization for Cooperation in Evaluation (IOCE)
The IOCE represents international, national, and regional Voluntary Organizations for Professional Evaluation (VOPEs) worldwide. It strengthens international evaluation through the exchange of evaluation methods and promotes good governance and recognition of the value evaluation has in improving people’s lives. Their website provides lists of the evaluation associations in many countries, through which individual national evaluators can be contacted.

M&E News UK
A news service focusing on developments in monitoring and evaluation relevant to development programs with social development objectives.

Compiled and edited September 2017.

 


The series is a project of the AAAS Science and Human Rights Coalition. The team organizer is Oliver Moles, Ph.D.
