General Methods and Resources

Evaluators often face challenging decisions about which methods and designs are most suitable for assessing whether an intervention “works.” While by no means exhaustive, the following list links to examples of general methods and resources that TAACCCT third-party evaluators may find helpful. Neither the U.S. Department of Labor nor Abt Associates endorses these materials or their authors. Some materials are publicly available, and hyperlinks have been provided; others must be purchased directly from the publisher or a bookseller, or downloaded via journal access databases.

Disclaimer: Neither Abt Associates nor its TAACCCT evaluation partners are responsible for the contents of any “off-site” web page referenced from this server or from private, third-party, pop-up, or browser-integrated software or applications. When you leave the TAACCCT Evaluation website, you are subject to the destination site’s privacy policy. We are not responsible for Section 508 compliance (accessibility) on other websites.


Document Title/Link: Can Nonexperimental Comparison Group Methods Match the Findings from a Random Assignment Evaluation of Mandatory Welfare-to-Work Programs?
Description: This document addresses two questions: (1) which nonexperimental comparison group methods provide the most accurate estimates of the impacts of mandatory welfare-to-work programs; and (2) do the best methods work well enough to substitute for random assignment experiments?
Author/Institute: Howard S. Bloom, Charles Michalopoulos, Carolyn J. Hill, and Ying Lei

Document Title/Link: CLEAR: Policies and Procedures
Description: The CLEAR Policies and Procedures document provides details on all aspects of CLEAR operations.
Author/Institute: U.S. Department of Labor's Clearinghouse for Labor Evaluation and Research

Document Title/Link: CLEAR: Causal Evidence Guidelines
Description: This document provides the CLEAR evidence criteria for rating causal studies.
Author/Institute: U.S. Department of Labor's Clearinghouse for Labor Evaluation and Research

Document Title/Link: Common Guidelines for Education Research and Development
Description: These guidelines provide a broad framework that clarifies research types and offers basic guidance on the purpose, justification, design features, and expected outcomes of each type.
Author/Institute: Institute of Education Sciences, U.S. Department of Education, and the National Science Foundation

Document Title/Link: Evaluating Program Evaluations: New Evidence on Commonly Used Nonexperimental Methods
Description: This article applies two conventional nonexperimental strategies for estimating the effects of social programs and compares the results to those of experimental methods.
Author/Institute: Daniel Friedlander and Philip K. Robins

Document Title/Link: Evaluation: A Systematic Approach
Description: This book covers techniques and approaches to evaluation, as well as guidelines for tailoring evaluations to fit programs and social contexts.
Author/Institute: Peter Rossi, Mark Lipsey, and Howard Freeman

Document Title/Link: Handbook on Impact Evaluation: Quantitative Methods and Practices
Description: This book reviews quantitative methods and models of impact evaluation, as well as monitoring and evaluation, operational evaluation, and mixed-methods approaches combining quantitative and qualitative analyses.
Author/Institute: Shahidur Khandker, Gayatri Koolwal, and Hussain Samad

Document Title/Link: Handbook of Practical Program Evaluation
Description: This book presents a variety of approaches to evaluation through brief articles by academics and practitioners.
Author/Institute: Kathryn Newcomer, Harry Hatry, and Joseph Wholey

Document Title/Link: How Close Is Close Enough? Testing Nonexperimental Estimates of Impact against Experimental Estimates of Impact with Education Test Scores as Outcomes
Description: This discussion paper applies propensity score matching (PSM) and compares the results to those from an experimental design in an educational setting.
Author/Institute: Elizabeth Ty Wilde and Robinson Hollister

Document Title/Link: How Do We Know if a Program Made a Difference?
Description: This manual presents statistical and econometric methods for program impact evaluation and causal modeling.
Author/Institute: Peter M. Lance, David K. Guilkey, Aiko Hattori, and Gustavo Angeles

Document Title/Link: Improving the Evaluation of DOL/ETA Pilot and Demonstration Projects: A Guide for Practitioners
Description: This 2001 paper details ways to improve DOL/ETA demonstration and pilot evaluations to maximize what is learned while minimizing costs and research burden.
Author/Institute: Stephen Bell

Document Title/Link: Understanding Variation in Treatment Effects in Education Impact Evaluations: An Overview of Quantitative Methods
Description: This report summarizes quantitative methods for examining variation in treatment effects across students, educators, and sites in education evaluations.
Author/Institute: Peter Z. Schochet, Mike Puma, and John Deke

Document Title/Link: Using State Administrative Data to Measure Program Performance
Description: This study uses state administrative data to examine the sensitivity of earnings impact estimates for a job training program under alternative nonexperimental methods, including propensity score matching, Mahalanobis distance matching, and regression adjustment.
Author/Institute: Peter R. Mueser, Kenneth R. Troske, and Alexey Gorislavsky

Document Title/Link: What Works Clearinghouse Procedures and Standards Handbook – Version 3.0
Description: This document outlines the basic steps the WWC uses to develop a review protocol, identify the relevant literature, assess research quality, and summarize evidence of effectiveness.
Author/Institute: U.S. Department of Education’s National Center for Education Evaluation and Regional Assistance

Document Title/Link: Which Study Designs Are Capable of Producing Valid Evidence About a Program’s Effectiveness?
Description: This guide provides a brief overview of the types of studies that produce valid evidence about a program’s effectiveness.
Author/Institute: Coalition for Evidence-Based Policy
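
Several of the resources above (e.g., Bloom et al.; Wilde and Hollister; Mueser, Troske, and Gorislavsky) assess nonexperimental comparison-group methods such as propensity score matching against experimental benchmarks. For readers unfamiliar with these methods, the sketch below illustrates a minimal one-to-one propensity score match on simulated data. It is a hedged, illustrative Python example only: it is not drawn from or endorsed by any of the listed documents, and all variable names and simulated values are hypothetical.

```python
# Minimal, illustrative sketch of propensity score matching (PSM), one of the
# nonexperimental comparison-group methods discussed in several resources above.
# All data below are simulated for demonstration purposes only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Simulate covariates, a treatment assignment that depends on them
# (selection on observables), and an outcome with a known treatment effect.
n = 2000
x = rng.normal(size=(n, 3))                               # observed covariates
p_treat = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.3 * x[:, 1])))
treated = rng.random(n) < p_treat
true_effect = 2.0
y = x @ np.array([1.0, 0.5, -0.5]) + true_effect * treated + rng.normal(size=n)

# Step 1: estimate propensity scores with a logistic regression.
ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]

# Step 2: match each treated unit to the comparison unit with the nearest
# propensity score (1-to-1 nearest-neighbor matching, with replacement).
treat_idx = np.where(treated)[0]
comp_idx = np.where(~treated)[0]
order = np.argsort(ps[comp_idx])
sorted_ps = ps[comp_idx][order]
pos = np.clip(np.searchsorted(sorted_ps, ps[treat_idx]), 0, len(comp_idx) - 1)
left = np.clip(pos - 1, 0, len(comp_idx) - 1)             # neighbor on each side
use_left = np.abs(sorted_ps[left] - ps[treat_idx]) < np.abs(sorted_ps[pos] - ps[treat_idx])
match = comp_idx[order[np.where(use_left, left, pos)]]

# Step 3: estimate the average treatment effect on the treated (ATT)
# as the mean outcome difference across matched pairs.
att = (y[treat_idx] - y[match]).mean()
print(f"Estimated ATT: {att:.2f} (simulated true effect: {true_effect})")
```

In practice, applications of PSM also involve checking covariate balance after matching, enforcing common support, and testing sensitivity to the matching specification; the resources listed above discuss when such estimates can, and cannot, approximate experimental benchmarks.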