Using Lagged Outcomes to Evaluate Bias in Value-Added Models
Value-added (VA) models measure agents' productivity based on the outcomes they produce. The utility of VA models for performance evaluation depends on the extent to which VA estimates are biased by selection. One common method of evaluating bias in VA is to test for balance in lagged values of the outcome. Using Monte Carlo simulations, we show that such balance tests do not yield robust information about bias in VA models: even unbiased VA estimates can be correlated with lagged outcomes. More generally, tests using lagged outcomes are uninformative about the degree of bias in misspecified VA models. These results arise because VA is itself estimated from historical data, which induces non-transparent correlations between VA estimates and lagged outcomes.
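The mechanical channel in the abstract's last sentence can be illustrated with a minimal Monte Carlo sketch. This is not the paper's actual simulation design; the data-generating process and all parameter values below are hypothetical. Students are randomly assigned to teachers and stay with the same teacher for two years, so true VA is unbiased by construction, yet a balance regression of lagged scores on estimated VA yields a large slope because the VA estimates are built from those same historical scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: J teachers, n students per class, random assignment
# held fixed across years, so there is no selection bias by construction.
J, n = 200, 25
N = J * n
mu = rng.normal(0.0, 0.15, J)          # true teacher effects
teacher = np.repeat(np.arange(J), n)   # assignment, independent of ability
ability = rng.normal(0.0, 1.0, N)      # student ability

# Lagged-year scores: teacher effect + ability + noise
y_lag = mu[teacher] + ability + rng.normal(0.0, 0.5, N)

# VA estimated from historical data: class means of lagged-year scores
va_hat = np.bincount(teacher, weights=y_lag) / np.bincount(teacher)

# "Balance test": slope from regressing lagged scores on estimated VA
x = va_hat[teacher]
slope = np.cov(y_lag, x, ddof=0)[0, 1] / np.var(x)
print(f"balance-test slope: {slope:.2f}")  # close to 1, with zero selection
```

The slope is close to one even though assignment is random, so the balance "failure" reflects only the mechanical fact that `va_hat` is constructed from the lagged scores themselves, not bias from selection.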
Volume (Year): 106 (2016)
Issue (Month): 5 (May)