
Site Selection Bias in Program Evaluation

Author Info

  • Hunt Allcott

Abstract

“Site selection bias” occurs when the probability that partners adopt or evaluate a program is correlated with treatment effects. I test for site selection bias in the context of the Opower energy conservation programs, using 111 randomized controlled trials (RCTs) involving 8.6 million households across the United States. Predictions based on rich microdata from the first ten replications substantially overstate efficacy in the next 101 sites. There is evidence of two positive selection mechanisms. First, local populations with stronger preferences for environmental conservation both encourage utilities to adopt the program and are more responsive to the treatment. Second, program managers initially target treatment at the most responsive consumer sub-populations, meaning that efficacy drops when utilities expand the program. While it may be optimal to initially target an intervention toward the most responsive populations, these results show how analysts can be systematically biased when extrapolating experimental results, even after many replications. I augment the Opower results by showing that microfinance institutions (MFIs) that run RCTs differ from the global population of MFIs and that hospitals that host clinical trials differ from the national population of hospitals.
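The abstract's central mechanism can be illustrated with a short simulation: when the sites whose populations respond most strongly are also the most likely to adopt and evaluate a program early, the average effect measured in the first sites overstates the effect in later sites. The Python sketch below is a minimal illustration with purely hypothetical numbers; it is not the paper's data or estimation method, and all parameter values are invented for exposition.

    # Minimal simulation of site selection bias (illustrative only; all numbers
    # are hypothetical and not drawn from the paper).
    import numpy as np

    rng = np.random.default_rng(0)

    n_sites = 111
    # Each site has its own true treatment effect (e.g., % reduction in energy use).
    true_effects = rng.normal(loc=1.5, scale=0.8, size=n_sites)

    # Sites with larger effects are assumed more likely to adopt/evaluate early:
    # the adoption ranking is positively correlated with the effect itself.
    adoption_score = true_effects + rng.normal(scale=0.5, size=n_sites)
    order = np.argsort(-adoption_score)          # most eager adopters first
    effects_in_order = true_effects[order]

    early = effects_in_order[:10]                # first ten evaluated sites
    later = effects_in_order[10:]                # remaining sites

    print(f"Mean effect, first 10 sites: {early.mean():.2f}")
    print(f"Mean effect, next {later.size} sites: {later.mean():.2f}")
    print(f"Mean effect, all sites:      {effects_in_order.mean():.2f}")
    # The early-site average overstates the effect in later sites because
    # adoption probability is positively correlated with the treatment effect.

Under these invented assumptions, the first ten sites show a noticeably larger mean effect than the remaining sites, which is the qualitative extrapolation problem the abstract describes.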

Download Info

If you experience problems downloading a file, check if you have the proper application to view it first. In case of further problems read the IDEAS help page. Note that these files are not on the IDEAS site. Please be patient as the files may be large.
File URL: http://www.nber.org/papers/w18373.pdf
Download Restriction: Access to the full text is generally limited to series subscribers; however, if the top-level domain of the client browser is in a developing country or transition economy, free access is provided. More information about subscriptions and free access is available at http://www.nber.org/wwphelp.html. Free access is also available to older working papers.

As access to this document is restricted, you may want to look for a different version under "Related research" (further below) or search for one.

Bibliographic Info

Paper provided by the National Bureau of Economic Research, Inc. in its series NBER Working Papers with number 18373.

Date of creation: Sep 2012
Handle: RePEc:nbr:nberwo:18373

Note: EEE LS
Contact details of provider:
Postal: National Bureau of Economic Research, 1050 Massachusetts Avenue Cambridge, MA 02138, U.S.A.
Phone: 617-868-3900
Web page: http://www.nber.org
More information through EDIRC

Related research


References

References listed on IDEAS
Please report citation or reference errors. If you are the registered author of the cited work, log in to your RePEc Author Service profile, click on "citations" and make the appropriate adjustments.
  1. Atila Abdulkadiroğlu & Joshua D. Angrist & Susan M. Dynarski & Thomas J. Kane & Parag A. Pathak, 2011. "Accountability and Flexibility in Public Schools: Evidence from Boston's Charters And Pilots," The Quarterly Journal of Economics, Oxford University Press, vol. 126(2), pages 699-748.
  2. Heckman, James J, 1979. "Sample Selection Bias as a Specification Error," Econometrica, Econometric Society, vol. 47(1), pages 153-61, January.
  3. Allcott, Hunt, 2011. "Social norms and energy conservation," Journal of Public Economics, Elsevier, vol. 95(9-10), pages 1082-1095, October.
  4. Joshua D. Angrist, 2004. "Treatment effect heterogeneity in theory and practice," Economic Journal, Royal Economic Society, vol. 114(494), pages C52-C83, 03.
  5. Guido W. Imbens & Jeffrey M. Wooldridge, 2009. "Recent Developments in the Econometrics of Program Evaluation," Journal of Economic Literature, American Economic Association, vol. 47(1), pages 5-86, March.
  6. Joseph G. Altonji & Todd E. Elder & Christopher R. Taber, 2005. "Selection on Observed and Unobserved Variables: Assessing the Effectiveness of Catholic Schools," Journal of Political Economy, University of Chicago Press, vol. 113(1), pages 151-184, February.
  7. Bruno Crépon & Esther Duflo & Marc Gurgand & Roland Rathelot & Philippe Zamora, 2012. "Do Labor Market Policies Have Displacement Effects? Evidence from a Clustered Randomized Experiment," NBER Working Papers 18597, National Bureau of Economic Research, Inc.
  8. James J. Heckman & Jeffrey Smith, 2000. "The Sensitivity of Experimental Impact Estimates (Evidence from the National JTPA Study)," NBER Chapters, in: Youth Employment and Joblessness in Advanced Countries, pages 331-356 National Bureau of Economic Research, Inc.
  9. Joshua D. Angrist & Parag A. Pathak & Christopher R. Walters, 2013. "Explaining Charter School Effectiveness," American Economic Journal: Applied Economics, American Economic Association, vol. 5(4), pages 1-27, October.
  10. Joshua D. Angrist & Jörn-Steffen Pischke, 2010. "The Credibility Revolution in Empirical Economics: How Better Research Design is Taking the Con out of Econometrics," Working Paper Series of the German Council for Social and Economic Data 142, German Council for Social and Economic Data (RatSWD).
  11. Dinardo, J. & Fortin, N.M. & Lemieux, T., 1994. "Labor Market Institutions and the Distribution of Wages, 1973-1992: a Semiparametric Approach," Cahiers de recherche 9406, Universite de Montreal, Departement de sciences economiques.
  12. Karlan, Dean S. & Zinman, Jonathan, 2007. "Observing Unobservables: Identifying Information Asymmetries with a Consumer Credit Field Experiment," CEPR Discussion Papers 6182, C.E.P.R. Discussion Papers.
  13. Charles F. Manski, 2010. "Policy Analysis with Incredible Certitude," NBER Working Papers 16207, National Bureau of Economic Research, Inc.
  14. repec:feb:artefa:0087 is not listed on IDEAS
  15. Tessa Bold & Mwangi Kimenyi & Germano Mwabu & Alice Ng'ang'a & Justin Sandefur, 2013. "Scaling-up What Works: Experimental Evidence on External Validity in Kenyan Education," CSAE Working Paper Series 2013-04, Centre for the Study of African Economies, University of Oxford.
  16. Hunt Allcott & Michael Greenstone, 2012. "Is There an Energy Efficiency Gap?," NBER Working Papers 17766, National Bureau of Economic Research, Inc.
  17. Marianne Bertrand & Esther Duflo & Sendhil Mullainathan, 2004. "How Much Should We Trust Differences-in-Differences Estimates?," The Quarterly Journal of Economics, MIT Press, vol. 119(1), pages 249-275, February.
  18. Jens Ludwig & Jeffrey R. Kling & Sendhil Mullainathan, 2011. "Mechanism Experiments and Policy Evaluations," Journal of Economic Perspectives, American Economic Association, vol. 25(3), pages 17-38, Summer.
  19. Sylvain Chassang & Gerard Padro i Miquel & Erik Snowberg, 2010. "Selective Trials: A Principal-Agent Approach to Randomized Controlled Experiments," NBER Working Papers 16343, National Bureau of Economic Research, Inc.
  20. Heckman, James J & Ichimura, Hidehiko & Todd, Petra E, 1997. "Matching as an Econometric Evaluation Estimator: Evidence from Evaluating a Job Training Programme," Review of Economic Studies, Wiley Blackwell, vol. 64(4), pages 605-54, October.
  21. James J. Heckman & Jeffrey A. Smith, 1995. "Assessing the Case for Social Experiments," Journal of Economic Perspectives, American Economic Association, vol. 9(2), pages 85-110, Spring.
  22. Edward Miguel & Michael Kremer, 2004. "Worms: Identifying Impacts on Education and Health in the Presence of Treatment Externalities," Econometrica, Econometric Society, vol. 72(1), pages 159-217, 01.
  23. James J. Heckman & Sergio Urzua & Edward Vytlacil, 2006. "Understanding Instrumental Variables in Models with Essential Heterogeneity," The Review of Economics and Statistics, MIT Press, vol. 88(3), pages 389-432, August.
  24. Elizabeth A. Stuart & Stephen R. Cole & Catherine P. Bradshaw & Philip J. Leaf, 2011. "The use of propensity scores to assess the generalizability of results from randomized trials," Journal of the Royal Statistical Society Series A, Royal Statistical Society, vol. 174(2), pages 369-386, 04.
  25. Joshua Angrist & Victor Lavy & Analia Schlosser, 2010. "Multiple Experiments for the Causal Link between the Quantity and Quality of Children," Journal of Labor Economics, University of Chicago Press, vol. 28(4), pages 773-824, October.
  26. Charles F. Manski, 1996. "Learning about Treatment Effects from Experiments with Random Assignment of Treatments," Journal of Human Resources, University of Wisconsin Press, vol. 31(4), pages 709-733.
  27. David S. Lee & Thomas Lemieux, 2010. "Regression Discontinuity Designs in Economics," Journal of Economic Literature, American Economic Association, vol. 48(2), pages 281-355, June.
  28. Steven Levitt & John List, 2009. "Field experiments in economics: The past, the present, and the future," Artefactual Field Experiments 00079, The Field Experiments Website.
  29. Bruce D. Meyer, 1995. "Lessons from the U.S. Unemployment Insurance Experiments," Journal of Economic Literature, American Economic Association, vol. 33(1), pages 91-131, March.
  30. Esther Duflo & Abhijit Banerjee & Shawn Cole & Leigh Linden, 2006. "Remedying Education: Evidence from Two Randomised Experiments in India," Working Papers id:360, eSocialSciences.
  31. Rajeev H. Dehejia, 2002. "Was there a Riverside miracle? An hierarchical framework for evaluating programs with grouped data," Discussion Papers 0102-15, Columbia University, Department of Economics.
  32. Dora L. Costa & Matthew E. Kahn, 2010. "Energy Conservation "Nudges" and Environmentalist Ideology: Evidence from a Randomized Residential Electricity Field Experiment," NBER Working Papers 15939, National Bureau of Economic Research, Inc.
  33. Bertrand, Marianne & Karlan, Dean & Mullainathan, Sendhil & Shafir, Eldar & Zinman, Jonathan, 2009. "What's Advertising Content Worth? Evidence from a Consumer Credit Marketing Field Experiment," Working Papers 58, Yale University, Department of Economics.
  34. Joshua Angrist & Ivan Fernandez-Val, 2010. "ExtrapoLATE-ing: External Validity and Overidentification in the LATE Framework," NBER Working Papers 16566, National Bureau of Economic Research, Inc.
  35. LaLonde, Robert J, 1986. "Evaluating the Econometric Evaluations of Training Programs with Experimental Data," American Economic Review, American Economic Association, vol. 76(4), pages 604-20, September.
  36. Pieter A. Gautier & Bas van der Klaauw, 2012. "Selection in a field experiment with voluntary participation," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 27(1), pages 63-84, 01.
  37. Murphy, Kevin M & Topel, Robert H, 2002. "Estimation and Inference in Two-Step Econometric Models," Journal of Business & Economic Statistics, American Statistical Association, vol. 20(1), pages 88-97, January.
  38. Karlan, Dean & Gine, Xavier, 2009. "Group versus Individual Liability: Long Term Evidence from Philippine Microcredit Lending Groups," Working Papers 61, Yale University, Department of Economics.
  39. Lant Pritchett & Justin Sandefur, 2013. "Context Matters for Size: Why External Validity Claims and Development Practice Don't Mix," Working Papers 336, Center for Global Development.
  40. Ian Ayres & Sophie Raseman & Alice Shih, 2013. "Evidence from Two Large Field Experiments that Peer Comparison Feedback Can Reduce Residential Energy Usage," Journal of Law, Economics and Organization, Oxford University Press, vol. 29(5), pages 992-1022, October.
  41. Joseph Hotz, V. & Imbens, Guido W. & Mortimer, Julie H., 2005. "Predicting the efficacy of future training programs using past experiences at other locations," Journal of Econometrics, Elsevier, vol. 125(1-2), pages 241-270.
  42. Raghabendra Chattopadhyay & Esther Duflo, 2004. "Women as Policy Makers: Evidence from a Randomized Policy Experiment in India," Econometrica, Econometric Society, vol. 72(5), pages 1409-1443, September.
Full references (including those not matched with items on IDEAS)

Citations

Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.

Cited by:
  1. Jeffrey Hammer & Dean Spears, 2013. "Village sanitation externalities and children's human capital: Evidence from a randomized experiment by the Maharashtra government," Working Papers 1443, Princeton University, Woodrow Wilson School of Public and International Affairs, Center for Health and Wellbeing.
  2. Ferraro, Paul J. & Miranda, Juan José, 2013. "Heterogeneous treatment effects and mechanisms in information-based environmental policies: Evidence from a large-scale field experiment," Resource and Energy Economics, Elsevier, vol. 35(3), pages 356-379.
  3. Morduch, Jonathan & Ravi, Shamika & Bauchet, Jonathan, 2012. "Failure vs. Displacement: Why an Innovative Anti-Poverty Program Showed No Net Impact," CEI Working Paper Series 2012-05, Center for Economic Institutions, Institute of Economic Research, Hitotsubashi University.
  4. Tessa Bold & Mwangi Kimenyi & Germano Mwabu & Alice Ng'ang'a & Justin Sandefur, 2013. "Scaling-up What Works: Experimental Evidence on External Validity in Kenyan Education," CSAE Working Paper Series 2013-04, Centre for the Study of African Economies, University of Oxford.
  5. Woolcock, Michael, 2013. "Using case studies to explore the external validity of 'complex' development interventions," Working Paper Series UNU-WIDER Research Paper, World Institute for Development Economic Research (UNU-WIDER).
  6. Allcott, Hunt & Rogers, Todd, 2012. "How Long Do Treatment Effects Last? Persistence and Durability of a Descriptive Norms Intervention's Effect on Energy Conservation," Working Paper Series rwp12-045, Harvard University, John F. Kennedy School of Government.
  7. Lant Pritchett & Justin Sandefur, 2013. "Context Matters for Size: Why External Validity Claims and Development Practice Don't Mix," Working Papers 336, Center for Global Development.
  8. Morduch, Jonathan & Ravi, Shamika & Bauchet, Jonathan, 2013. "Substitution Bias and External Validity: Why an Innovative Anti-poverty Program Showed no Net Impact," CEI Working Paper Series 2013-03, Center for Economic Institutions, Institute of Economic Research, Hitotsubashi University.

Lists

This item is not listed on Wikipedia, on a reading list or among the top items on IDEAS.

Statistics

Access and download statistics

Corrections

When requesting a correction, please mention this item's handle: RePEc:nbr:nberwo:18373. See general information about how to correct material in RePEc.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact the provider listed above.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If references are entirely missing, you can add them using this form.

If the full references list an item that is present in RePEc, but the system did not link to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your profile, as there may be some citations waiting for confirmation.

Please note that corrections may take a couple of weeks to filter through the various RePEc services.