
Site Selection Bias in Program Evaluation

"Site selection bias" can occur when the probability that a program is adopted or evaluated is correlated with its impacts. I test for site selection bias in the context of the Opower energy conservation programs, using 111 randomized control trials involving 8.6 million households across the U.S. Predictions based on rich microdata from the first ten replications substantially overstate efficacy in the next 101 sites. Several mechanisms caused this positive selection. For example, utilities in more environmentalist areas are more likely to adopt the program, and their customers are more responsive to the treatment. Also, because utilities initially target treatment at higher-usage consumer subpopulations, efficacy drops as the program is later expanded. The results illustrate how program evaluations can still give systematically biased out-of-sample predictions, even after many replications.

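The selection mechanism in the abstract can be made concrete with a toy simulation. The sketch below is purely illustrative, not the paper's data or estimation: the effect distribution, the adoption-noise scale, and all numeric parameters are invented assumptions; only the split into 10 early replications and 101 later sites follows the abstract. When sites whose customers respond more strongly tend to adopt earlier, the mean effect among early adopters systematically overstates the mean among later ones.

    import numpy as np

    rng = np.random.default_rng(0)

    n_sites = 111                        # total RCT sites, as in the abstract
    tau = rng.normal(1.5, 0.8, n_sites)  # true per-site effects (hypothetical scale)

    # Positive selection: higher-effect sites are more likely to adopt early.
    # The noise term keeps the adoption order from being a deterministic ranking.
    adoption_score = tau + rng.normal(0.0, 0.5, n_sites)
    order = np.argsort(-adoption_score)  # earliest adopters first

    early = tau[order[:10]]   # first ten replications
    late = tau[order[10:]]    # next 101 sites

    print(f"mean effect, first 10 sites: {early.mean():.2f}")
    print(f"mean effect, next 101 sites: {late.mean():.2f}")
    # The early-site mean exceeds the late-site mean, so forecasts for new
    # sites built only from the early replications are biased upward.

Rerunning the sketch with adoption_score drawn independently of tau closes the gap between the two means, which is the no-selection benchmark the paper's test compares against.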

File URL: http://www.nber.org/papers/w18373.pdf
Download Restriction: Access to the full text is generally limited to series subscribers; free access is provided to clients in developing and transition economies and to older working papers. See http://www.nber.org/wwphelp.html for details.

Paper provided by the National Bureau of Economic Research, Inc. in its series NBER Working Papers with number 18373.

Date of creation: Sep 2012
Handle: RePEc:nbr:nberwo:18373
Note: NBER program areas EEE (Environment and Energy Economics) and LS (Labor Studies)
Contact details of provider: Postal: National Bureau of Economic Research, 1050 Massachusetts Avenue, Cambridge, MA 02138, U.S.A.
Phone: 617-868-3900
Web page: http://www.nber.org






This information is provided to you by IDEAS at the Research Division of the Federal Reserve Bank of St. Louis using RePEc data.