Using Performance Standards to Evaluate Social Programs with Incomplete Outcome Data: General Issues and Application to a Higher Education Block Grant Program
The basic idea of program evaluation is both simple and appealing: program outcomes are measured and compared to some minimum performance standard, or threshold. In practice, however, evaluation is quite difficult, because two fundamental problems of outcome measurement must be addressed. The first, which we call the problem of auxiliary outcomes, is that we do not observe the outcome of interest. The second, which we call the problem of counterfactual outcomes, is that we do not observe the outcomes that would have occurred in the absence of the program. This paper examines how performance standards should be set and applied in the face of these measurement problems. In particular, we consider the problem of evaluating the new World Bank-sponsored Quality of Undergraduate Education (QUE) program. This competitive block grant program is to be judged by its effects on student outcomes, not by the particular ways in which grantee departments use their funds. Our central message is that the proper way to implement standards varies with the prior information that the evaluator can credibly bring to bear to compensate for incomplete outcome data. An evaluator confronted with the auxiliary and counterfactual outcomes problems should combine the available data with credible assumptions on treatments and outcomes. Given this information, the performance of a program may be deemed acceptable, unacceptable, or indeterminate.
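The three-valued verdict described in the abstract can be illustrated with a minimal sketch (not from the paper; all names are hypothetical). It computes worst-case bounds on a mean outcome when some units' outcomes are unobserved, assuming only that outcomes lie in [0, 1], and compares the resulting bound to a performance threshold:

```python
# Illustrative sketch: worst-case bounds on a mean outcome with missing data,
# compared against a minimum performance threshold. Assumes outcomes lie in
# [0, 1]; the function name and interface are hypothetical.

def evaluate(observed, n_missing, threshold):
    """Return 'acceptable', 'unacceptable', or 'indeterminate'.

    observed   : list of observed outcomes, each in [0, 1]
    n_missing  : number of units whose outcome is unobserved
    threshold  : minimum acceptable mean outcome
    """
    n = len(observed) + n_missing
    s = sum(observed)
    lower = s / n                 # missing outcomes set to 0 (worst case)
    upper = (s + n_missing) / n   # missing outcomes set to 1 (best case)
    if lower >= threshold:
        return "acceptable"       # even the worst case clears the standard
    if upper < threshold:
        return "unacceptable"     # even the best case falls short
    return "indeterminate"        # data plus assumptions cannot decide

print(evaluate([0.9, 0.8, 1.0], n_missing=1, threshold=0.5))  # acceptable
print(evaluate([0.1, 0.0], n_missing=1, threshold=0.8))       # unacceptable
print(evaluate([0.6, 0.7], n_missing=2, threshold=0.5))       # indeterminate
```

The point of the sketch is the paper's central message in miniature: with weak prior information the identification region is wide and many programs land in the indeterminate category; stronger credible assumptions (e.g. monotone treatment response) narrow the bounds and make the verdict more often decisive.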