Addressing Pain Points in Qualitative Research. With Luis A. Camacho and Kate Marple-Cantrell. USAID. 2024.

Many evaluations and assessments involve a team of researchers conducting a series of qualitative interviews and focus group discussions at one point in time. Yet such studies regularly prompt complaints from commissioners, evaluators, and implementers about the accuracy and reliability of the findings, and thus the usefulness of the study. This study lays out guidance on addressing seven pain points in qualitative evaluation work: 1) case and site selection for small-n studies; 2) selection of respondents; 3) social desirability bias; 4) qualitative data capture; 5) qualitative data analysis; 6) evidentiary support for statements; and 7) clarity of findings to facilitate use. The guidance includes minimum standards to which all studies should adhere, good practices that studies should implement when feasible, and guidance specifically for study commissioners. (Full report; Brief for study commissioners)
DRG Mission Use of Evidence (MUSE): Lessons from Evidence Utilization in USAID DRG Program Design. With Robert Gerstein and Aleta Starosta. USAID. 2022.

This study explores how the United States Agency for International Development (USAID) plans for and incorporates research evidence and evaluation into its democracy, human rights, and governance (DRG) programs. As such, the study focuses on both the use and the generation of evidence. On the use side, the goal is to understand the extent to which research evidence is used to inform activity designs, identify obstacles to greater research evidence use, and draw recommendations and conclusions to better integrate research evidence into activity design. On the generation side, the study seeks to understand how activity design teams plan research, evaluation, and learning and ways in which USAID could improve evidence and evaluation planning. This research informed the strategy of the DRG Center (and later the DRG Bureau) to promote the use of evidence and to improve evidence and evaluation planning. (Full report; Brief for activity designers)
DRG Impact Evaluation Retrospective: Learning from Three Generations of Impact Evaluations. With Michael G. Findley and Aleta Starosta. USAID. 2021.

In response to an influential 2008 National Academies of Sciences report, USAID’s DRG Center initiated a pilot program of impact evaluations (IEs). This retrospective looks across the 29 randomized controlled trial IEs conducted since that report and identifies both the accomplishments and the challenges of conducting rigorous evaluation work. The study derived lessons learned and evidence-based recommendations that guided subsequent DRG impact evaluations. (Full report; Brief)
Other Evidence and Learning Team Initiatives
Evidence and Learning Knowledge Management Report (2025): Prior to its dissolution in 2025, the DRG Bureau’s Evidence and Learning (E&L) team worked to (a) build evidence and (b) increase the use of evidence in decision-making. The tools developed by the team are no longer used by USAID; however, they can be built upon by others seeking to promote evidence-informed decision-making. This lengthy document captures the key initiatives, guidance, methods, and documents of the E&L team’s work for others to draw on (Full report).
Tools for asking better evaluation questions (2024): A set of clear, realistic evaluation questions linked to the evaluation purpose can yield detailed findings and actionable recommendations that improve program outcomes. Conversely, questions that are unwieldy, unrealistic, or difficult to understand can make a mess of even the best methodology. Aleta Starosta and Andrew Green identified three key elements (feasibility, scope, and clarity) where performance evaluation questions commonly fall short. (Three keys brief; Question development workbook; Glossary of DRG evaluation terms; Full report)
Collaborating, learning, and adapting informed by evidence (CLAIRE) (2025): USAID employed a methodology termed Collaborating, Learning, and Adapting (CLA) to avoid programmatic inflexibility and increase program effectiveness. This research, conducted by Michael Cowan and Noelle Wyman Roth, explored the extent to which CLA processes used structured evidence (including data, research, and evaluations) to inform adaptations, and how structured evidence could be better incorporated into CLA processes (Full report).
Ensuring research findings are put to use: A well-designed and well-executed study with clear policy implications is no guarantee that its findings will be used by practitioners. An initiative that eventually came to be known as Advancing Utilization and Dissemination in Research and Analysis (AUDRA) tracked and promoted research utilization across 70 research projects commissioned by USAID’s DRG Bureau. (Initial 2022 report by Miriam Counterman, Simon Conte, Aleta Starosta, and Kate Marple-Cantrell).
Post-USAID efforts to keep learning
Preserving knowledge: Many essential DRG resources have been preserved at the DRG Hub.
EIRP community: Some of the DRG monitoring, evaluation, and learning community has shifted over to a new initiative: E.I. Research Partners.
This Week in Social Science: Chris Grady and Levi Adelman produce a weekly Substack newsletter summarizing important social science research. Sign up for TWISS here.