The Sensitivity of Experimental Impact Estimates (Evidence from the National JTPA Study)
In: Youth Employment and Joblessness in Advanced Countries
The recent experimental evaluation of the U.S. Job Training Partnership Act (JTPA) program found negative effects of training on the earnings of disadvantaged male youth and no effect on the earnings of disadvantaged female youth. These findings provided justification for Congress to cut the budget of JTPA's youth component by over 80 percent. In this paper, we examine the sensitivity of the experimental impact estimates along several dimensions of construction and interpretation. We find that the statistical significance of the male youth estimates is extremely fragile and that the magnitudes of the estimates for both youth groups are sensitive to nearly all the factors we consider. In particular, accounting for experimental control group members who substitute training from other providers leads to a much more positive picture regarding the effectiveness of JTPA classroom training. Our study indicates the value of sensitivity analyses in experimental evaluations and illustrates that experimental impact estimates, like those from nonexperimental analyses, require careful interpretation if they are to provide a reliable guide to policymakers.
This item is provided by the National Bureau of Economic Research, Inc in its series NBER Chapters, number 6810.
Handle: RePEc:nbr:nberch:6810
Contact details of provider: National Bureau of Economic Research, 1050 Massachusetts Avenue, Cambridge, MA 02138, U.S.A.
Web page: http://www.nber.org