
Site Selection Bias in Program Evaluation

Listed author(s):
  • Hunt Allcott

"Site selection bias" can occur when the probability that a program is adopted or evaluated is correlated with its impacts. I test for site selection bias in the context of the Opower energy conservation programs, using 111 randomized control trials involving 8.6 million households across the U.S. Predictions based on rich microdata from the first ten replications substantially overstate efficacy in the next 101 sites. Several mechanisms caused this positive selection. For example, utilities in more environmentalist areas are more likely to adopt the program, and their customers are more responsive to the treatment. Also, because utilities initially target treatment at higher-usage consumer subpopulations, efficacy drops as the program is later expanded. The results illustrate how program evaluations can still give systematically biased out-of-sample predictions, even after many replications.


File URL: http://www.nber.org/papers/w18373.pdf
Download Restriction: no

Paper provided by the National Bureau of Economic Research, Inc. in its series NBER Working Papers with number 18373.

Date of creation: Sep 2012
Publication status: Published as Hunt Allcott, 2015. "Site Selection Bias in Program Evaluation," The Quarterly Journal of Economics, vol. 130(3), pages 1117-1165.
Handle: RePEc:nbr:nberwo:18373
Note: EEE LS
Contact details of provider:
Postal: National Bureau of Economic Research, 1050 Massachusetts Avenue, Cambridge, MA 02138, U.S.A.
Phone: 617-868-3900
Web page: http://www.nber.org

References listed on IDEAS

  1. Bruno Crépon & Esther Duflo & Marc Gurgand & Roland Rathelot & Philippe Zamora, 2013. "Do Labor Market Policies have Displacement Effects? Evidence from a Clustered Randomized Experiment," The Quarterly Journal of Economics, Oxford University Press, vol. 128(2), pages 531-580.
  2. Joshua D. Angrist & Jörn-Steffen Pischke, 2010. "The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics," Journal of Economic Perspectives, American Economic Association, vol. 24(2), pages 3-30, Spring.
  3. Joshua D. Angrist & Parag A. Pathak & Christopher R. Walters, 2013. "Explaining Charter School Effectiveness," American Economic Journal: Applied Economics, American Economic Association, vol. 5(4), pages 1-27, October.
  4. James J. Heckman & Jeffrey Smith, 2000. "The Sensitivity of Experimental Impact Estimates (Evidence from the National JTPA Study)," NBER Chapters, in: Youth Employment and Joblessness in Advanced Countries, pages 331-356, National Bureau of Economic Research, Inc.
  5. DiNardo, John & Fortin, Nicole M & Lemieux, Thomas, 1996. "Labor Market Institutions and the Distribution of Wages, 1973-1992: A Semiparametric Approach," Econometrica, Econometric Society, vol. 64(5), pages 1001-1044, September.
  6. Joseph G. Altonji & Todd E. Elder & Christopher R. Taber, 2005. "Selection on Observed and Unobserved Variables: Assessing the Effectiveness of Catholic Schools," Journal of Political Economy, University of Chicago Press, vol. 113(1), pages 151-184, February.
  7. James J. Heckman & Hidehiko Ichimura & Petra E. Todd, 1997. "Matching As An Econometric Evaluation Estimator: Evidence from Evaluating a Job Training Programme," Review of Economic Studies, Oxford University Press, vol. 64(4), pages 605-654.
  8. David S. Lee & Thomas Lemieux, 2010. "Regression Discontinuity Designs in Economics," Journal of Economic Literature, American Economic Association, vol. 48(2), pages 281-355, June.
  9. Ian Ayres & Sophie Raseman & Alice Shih, 2013. "Evidence from Two Large Field Experiments that Peer Comparison Feedback Can Reduce Residential Energy Usage," Journal of Law, Economics and Organization, Oxford University Press, vol. 29(5), pages 992-1022, October.
  10. Jens Ludwig & Jeffrey R. Kling & Sendhil Mullainathan, 2011. "Mechanism Experiments and Policy Evaluations," Journal of Economic Perspectives, American Economic Association, vol. 25(3), pages 17-38, Summer.
  11. James J. Heckman & Jeffrey A. Smith, 1995. "Assessing the Case for Social Experiments," Journal of Economic Perspectives, American Economic Association, vol. 9(2), pages 85-110, Spring.
  12. Sylvain Chassang & Gerard Padro I Miquel & Erik Snowberg, 2012. "Selective Trials: A Principal-Agent Approach to Randomized Controlled Experiments," American Economic Review, American Economic Association, vol. 102(4), pages 1279-1309, June.
  13. Joshua D. Angrist, 2004. "Treatment effect heterogeneity in theory and practice," Economic Journal, Royal Economic Society, vol. 114(494), pages 52-83, March.
  14. Dora L. Costa & Matthew E. Kahn, 2013. "Energy Conservation “Nudges” And Environmentalist Ideology: Evidence From A Randomized Residential Electricity Field Experiment," Journal of the European Economic Association, European Economic Association, vol. 11(3), pages 680-702, June.
  15. Hunt Allcott & Michael Greenstone, 2012. "Is There an Energy Efficiency Gap?," Journal of Economic Perspectives, American Economic Association, vol. 26(1), pages 3-28, Winter.
  16. Allcott, Hunt, 2011. "Social norms and energy conservation," Journal of Public Economics, Elsevier, vol. 95(9-10), pages 1082-1095, October.
  17. Dehejia, Rajeev H, 2003. "Was There a Riverside Miracle? A Hierarchical Framework for Evaluating Programs with Grouped Data," Journal of Business & Economic Statistics, American Statistical Association, vol. 21(1), pages 1-11, January.
  18. James J. Heckman & Sergio Urzua & Edward Vytlacil, 2006. "Understanding Instrumental Variables in Models with Essential Heterogeneity," The Review of Economics and Statistics, MIT Press, vol. 88(3), pages 389-432, August.
  19. Raghabendra Chattopadhyay & Esther Duflo, 2004. "Women as Policy Makers: Evidence from a Randomized Policy Experiment in India," Econometrica, Econometric Society, vol. 72(5), pages 1409-1443, September.
  20. Dean Karlan & Jonathan Zinman, 2009. "Observing Unobservables: Identifying Information Asymmetries With a Consumer Credit Field Experiment," Econometrica, Econometric Society, vol. 77(6), pages 1993-2008, November.
  21. Guido W. Imbens & Jeffrey M. Wooldridge, 2009. "Recent Developments in the Econometrics of Program Evaluation," Journal of Economic Literature, American Economic Association, vol. 47(1), pages 5-86, March.
  22. Toshi H. Arimura & Shanjun Li & Richard G. Newell & Karen Palmer, 2012. "Cost-Effectiveness of Electricity Energy Efficiency Programs," The Energy Journal, International Association for Energy Economics, vol. 33(2).
  23. LaLonde, Robert J, 1986. "Evaluating the Econometric Evaluations of Training Programs with Experimental Data," American Economic Review, American Economic Association, vol. 76(4), pages 604-620, September.
  24. Marianne Bertrand & Esther Duflo & Sendhil Mullainathan, 2004. "How Much Should We Trust Differences-In-Differences Estimates?," The Quarterly Journal of Economics, Oxford University Press, vol. 119(1), pages 249-275.
  25. Guido W. Imbens, 2010. "Better LATE Than Nothing: Some Comments on Deaton (2009) and Heckman and Urzua (2009)," Journal of Economic Literature, American Economic Association, vol. 48(2), pages 399-423, June.
  26. Elizabeth A. Stuart & Stephen R. Cole & Catherine P. Bradshaw & Philip J. Leaf, 2011. "The use of propensity scores to assess the generalizability of results from randomized trials," Journal of the Royal Statistical Society Series A, Royal Statistical Society, vol. 174(2), pages 369-386, April.
  27. Joseph Hotz, V. & Imbens, Guido W. & Mortimer, Julie H., 2005. "Predicting the efficacy of future training programs using past experiences at other locations," Journal of Econometrics, Elsevier, vol. 125(1-2), pages 241-270.
  28. Murphy, Kevin M & Topel, Robert H, 2002. "Estimation and Inference in Two-Step Econometric Models," Journal of Business & Economic Statistics, American Statistical Association, vol. 20(1), pages 88-97, January.
  29. Abhijit V. Banerjee & Shawn Cole & Esther Duflo & Leigh Linden, 2007. "Remedying Education: Evidence from Two Randomized Experiments in India," The Quarterly Journal of Economics, Oxford University Press, vol. 122(3), pages 1235-1264.
  30. Bruce D. Meyer, 1995. "Lessons from the U.S. Unemployment Insurance Experiments," Journal of Economic Literature, American Economic Association, vol. 33(1), pages 91-131, March.
  31. Xavier Giné & Dean Karlan, 2009. "Group versus Individual Liability: Long Term Evidence from Philippine Microcredit Lending Groups," Working Papers 970, Economic Growth Center, Yale University.
  32. Charles F. Manski, 2011. "Policy Analysis with Incredible Certitude," Economic Journal, Royal Economic Society, vol. 121(554), pages 261-289, August.
  33. repec:feb:artefa:0087 (not matched with an item on IDEAS).
  34. Charles F. Manski, 1996. "Learning about Treatment Effects from Experiments with Random Assignment of Treatments," Journal of Human Resources, University of Wisconsin Press, vol. 31(4), pages 709-733.
  35. Heckman, James J., 1979. "Sample Selection Bias as a Specification Error," Econometrica, Econometric Society, vol. 47(1), pages 153-161, January.
  36. Levitt, Steven D. & List, John A., 2009. "Field experiments in economics: The past, the present, and the future," European Economic Review, Elsevier, vol. 53(1), pages 1-18, January.
  37. Edward Miguel & Michael Kremer, 2004. "Worms: Identifying Impacts on Education and Health in the Presence of Treatment Externalities," Econometrica, Econometric Society, vol. 72(1), pages 159-217, January.
  38. Joshua Angrist & Victor Lavy & Analia Schlosser, 2010. "Multiple Experiments for the Causal Link between the Quantity and Quality of Children," Journal of Labor Economics, University of Chicago Press, vol. 28(4), pages 773-824, October.
  39. Atila Abdulkadiroğlu & Joshua D. Angrist & Susan M. Dynarski & Thomas J. Kane & Parag A. Pathak, 2011. "Accountability and Flexibility in Public Schools: Evidence from Boston's Charters And Pilots," The Quarterly Journal of Economics, Oxford University Press, vol. 126(2), pages 699-748.
  40. Hunt Allcott & Todd Rogers, 2014. "The Short-Run and Long-Run Effects of Behavioral Interventions: Experimental Evidence from Energy Conservation," American Economic Review, American Economic Association, vol. 104(10), pages 3003-3037, October.
  41. Lant Pritchett & Justin Sandefur, 2013. "Context Matters for Size: Why External Validity Claims and Development Practice Don't Mix," Working Papers 336, Center for Global Development.
  42. Pieter A. Gautier & Bas van der Klaauw, 2012. "Selection in a field experiment with voluntary participation," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 27(1), pages 63-84, January.
  43. Joshua Angrist & Ivan Fernandez-Val, 2010. "ExtrapoLATE-ing: External Validity and Overidentification in the LATE Framework," NBER Working Papers 16566, National Bureau of Economic Research, Inc.
