Learning about social programs from experiments with random assignment of treatments
Abstract
The importance of social programs to a diverse population creates a legitimate concern that the findings of evaluations be widely credible. The weaker are the assumptions imposed, the more widely credible are the findings. The classical argument for random assignment of treatments is viewed by many as enabling evaluation under weak assumptions, and has generated much interest in the conduct of experiments. But the classical argument does impose assumptions, and there often is good reason to doubt their realism. Some researchers, finding the classical assumptions implausible, impose other assumptions strong enough to identify treatment effects of interest. In contrast, the recent literature examined in this article explores the inferences that may be drawn from experimental data under assumptions weak enough to yield widely credible findings. This literature has two branches. One seeks out notions of treatment effect that are identified when the experimental data are combined with weak assumptions. The canonical finding is that the average treatment effect within some context-specific subpopulation is identified. The other branch specifies a population of a priori interest and seeks to learn about treatment effects in this population. Here the canonical finding is a bound on average treatment effects. The various approaches to the analysis of experiments are complementary from a mathematical perspective, but in tension as guides to evaluation practice. The reader of an evaluation reporting that some social program "works" or has "positive impact" should be careful to ascertain what treatment effect has been estimated and under what assumptions.
Bibliographic Info
Paper provided by the University of Wisconsin Institute for Research on Poverty in its series Institute for Research on Poverty Discussion Papers with number 1061-95.
Contact details of provider:
Postal: 3412 Social Science Building, 1180 Observatory Drive, Madison, WI 53706
Phone: (608) 262-6358
Fax: (608) 265-3119
Web page: http://www.ssc.wisc.edu/irp/dp/dplist.htm
More information through EDIRC
Other versions of this item:
- Charles F. Manski, 1993. "The Mixing Problem in Program Evaluation," NBER Technical Working Papers 0148, National Bureau of Economic Research, Inc.
- Manski, C.F., 1990. "The Selection Problem," Working papers 90-12, Wisconsin Madison - Social Systems.
- Heckman, James J. & Robb, Richard Jr., 1985. "Alternative methods for evaluating the impact of interventions : An overview," Journal of Econometrics, Elsevier, vol. 30(1-2), pages 239-267.
- Nancy Clements & James Heckman & Jeffrey Smith, 1994. "Making the Most Out Of Social Experiments: Reducing the Intrinsic Uncertainty in Evidence from Randomized Trials with an Application to the JTPA Exp," NBER Technical Working Papers 0149, National Bureau of Economic Research, Inc.
- Aaron, Henry J, 1989. "Politics and the Professors Revisited," American Economic Review, American Economic Association, vol. 79(2), pages 1-15, May.
- James J. Heckman, 1991. "Randomization and Social Policy Evaluation," NBER Technical Working Papers 0107, National Bureau of Economic Research, Inc.
- Jeffrey E. Harris, 1985. "Macroexperiments versus Microexperiments for Health Policy," NBER Chapters, in: Social Experimentation, pages 145-186, National Bureau of Economic Research, Inc.
- Frank Stafford, 1985. "Income-Maintenance Policy and Work Effort: Learning from Experiments and Labor-Market Studies," NBER Chapters, in: Social Experimentation, pages 95-144, National Bureau of Economic Research, Inc.
- Steven Levitt & John List, 2009. "Field experiments in economics: The past, the present, and the future," Artefactual Field Experiments 00079, The Field Experiments Website.
- Levitt, Steven D. & List, John A., 2009. "Field experiments in economics: The past, the present, and the future," European Economic Review, Elsevier, vol. 53(1), pages 1-18, January.
- Steven D. Levitt & John A. List, 2008. "Field Experiments in Economics: The Past, The Present, and The Future," NBER Working Papers 14356, National Bureau of Economic Research, Inc.
- Meyer, Bruce D, 1995. "Natural and Quasi-experiments in Economics," Journal of Business & Economic Statistics, American Statistical Association, vol. 13(2), pages 151-161, April.
- Mark Schreiner, 2001. "Evaluation and Microenterprise Programs," Development and Comp Systems 0108002, EconWPA, revised 27 Dec 2001.
- Cristian Aedo, undated. "The Impact of Training Policies in Latin America and the Caribbean: The Case of "Programa Joven"," ILADES-Georgetown University Working Papers inv131, Ilades-Georgetown University, Universidad Alberto Hurtado/School of Economics and Business.