Learning about social programs from experiments with random assignment of treatments
The importance of social programs to a diverse population creates a legitimate concern that the findings of evaluations be widely credible. The weaker are the assumptions imposed, the more widely credible are the findings. The classical argument for random assignment of treatments is viewed by many as enabling evaluation under weak assumptions, and has generated much interest in the conduct of experiments. But the classical argument does impose assumptions, and there often is good reason to doubt their realism. Some researchers, finding the classical assumptions implausible, impose other assumptions strong enough to identify treatment effects of interest. In contrast, the recent literature examined in this article explores the inferences that may be drawn from experimental data under assumptions weak enough to yield widely credible findings. This literature has two branches. One seeks out notions of treatment effect that are identified when the experimental data are combined with weak assumptions. The canonical finding is that the average treatment effect within some context-specific subpopulation is identified. The other branch specifies a population of a priori interest and seeks to learn about treatment effects in this population. Here the canonical finding is a bound on average treatment effects. The various approaches to the analysis of experiments are complementary from a mathematical perspective, but in tension as guides to evaluation practice. The reader of an evaluation reporting that some social program "works" or has "positive impact" should be careful to ascertain what treatment effect has been estimated and under what assumptions.
Contact details of provider:
Postal: 3412 Social Science Building, 1180 Observatory Drive, Madison, WI 53706
Phone: (608) 262-6358
Fax: (608) 265-3119
Web page: http://www.ssc.wisc.edu/irp/dp/dplist.htm