
Horseshoes, hand grenades, and treatment effects? Reassessing whether nonexperimental estimators are biased

Author

Listed:
  • Fortson, Kenneth
  • Gleason, Philip
  • Kopa, Emma
  • Verbitsky-Savitz, Natalya

Abstract

Randomized controlled trials (RCTs) are considered the gold standard for estimating treatment effects. When an RCT is infeasible, regression modeling or statistical matching is often used instead. Nonexperimental methods such as these could produce unbiased estimates if their underlying assumptions hold, but those assumptions are usually not testable. Most prior studies testing nonexperimental designs find that they fail to produce unbiased estimates, but those studies examined weaker evaluation designs. The present study addresses these limitations using student-level data from a large-scale RCT of charter schools in which standardized achievement tests are the key outcome measure. The use of baseline data that are strongly predictive of the key outcome measures considerably reduces, but might not completely eliminate, bias.
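
The nonexperimental estimators the abstract refers to are benchmarked against the experimental estimate from the same RCT, a design known as a within-study comparison. The sketch below illustrates that logic on simulated data; it is not the authors' code or data, and the regression-adjustment and nearest-neighbor-matching estimators shown are generic stand-ins for the methods examined in the paper.

```python
# Minimal within-study comparison sketch on simulated data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated baseline achievement and a known true treatment effect.
baseline = rng.normal(0.0, 1.0, n)
true_effect = 0.15

# Experimental benchmark: treatment assigned at random, so the simple
# difference in means is unbiased for the true effect.
z = rng.integers(0, 2, n)
y_exp = 0.8 * baseline + true_effect * z + rng.normal(0.0, 0.5, n)
exp_estimate = y_exp[z == 1].mean() - y_exp[z == 0].mean()

# Nonexperimental setting: "treatment" status depends on the baseline score,
# so treated and comparison students differ systematically at baseline.
d = rng.binomial(1, 1.0 / (1.0 + np.exp(-baseline)))
y_obs = 0.8 * baseline + true_effect * d + rng.normal(0.0, 0.5, n)

# (1) Naive difference in means: biased because of selection on baseline.
naive = y_obs[d == 1].mean() - y_obs[d == 0].mean()

# (2) Regression adjustment: OLS of the outcome on treatment and baseline.
X = np.column_stack([np.ones(n), d, baseline])
ols_estimate = np.linalg.lstsq(X, y_obs, rcond=None)[0][1]

# (3) Nearest-neighbor matching on the baseline score, with replacement.
treated = np.flatnonzero(d == 1)
control = np.flatnonzero(d == 0)
dist = np.abs(baseline[treated][:, None] - baseline[control][None, :])
matched = control[dist.argmin(axis=1)]
match_estimate = (y_obs[treated] - y_obs[matched]).mean()

print(f"true effect            {true_effect:.3f}")
print(f"experimental benchmark {exp_estimate:.3f}")
print(f"naive comparison       {naive:.3f}")
print(f"regression adjustment  {ols_estimate:.3f}")
print(f"matching on baseline   {match_estimate:.3f}")
```

Because selection into treatment in this simulation operates only through the baseline score, conditioning on that score brings both nonexperimental estimates close to the experimental benchmark; this mirrors the paper's point that baseline data strongly predictive of the outcome can substantially reduce, though perhaps not eliminate, bias.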

Suggested Citation

  • Fortson, Kenneth & Gleason, Philip & Kopa, Emma & Verbitsky-Savitz, Natalya, 2015. "Horseshoes, hand grenades, and treatment effects? Reassessing whether nonexperimental estimators are biased," Economics of Education Review, Elsevier, vol. 44(C), pages 100-113.
  • Handle: RePEc:eee:ecoedu:v:44:y:2015:i:c:p:100-113
    DOI: 10.1016/j.econedurev.2014.11.001

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0272775714001022
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.econedurev.2014.11.001?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. James J. Heckman & Petra E. Todd, 2009. "A note on adapting propensity score matching and selection models to choice based samples," Econometrics Journal, Royal Economic Society, vol. 12(s1), pages 230-234, January.
    2. Steven Glazerman & Dan M. Levy & David Myers, 2003. "Nonexperimental Versus Experimental Estimates of Earnings Impacts," The ANNALS of the American Academy of Political and Social Science, vol. 589(1), pages 63-93, September.
    3. Christina Clark Tuttle & Bing-ru Teh & Ira Nichols-Barrer & Brian P. Gill & Philip Gleason, "undated". "Student Characteristics and Achievement in 22 KIPP Middle Schools," Mathematica Policy Research Reports 69064a347d534ffa8947d7b6e, Mathematica Policy Research.
    4. Philip Gleason & Melissa Clark & Christina Clark Tuttle & Emily Dwoyer, "undated". "The Evaluation of Charter School Impacts," Mathematica Policy Research Reports 3066da11915a4b04a77b38848, Mathematica Policy Research.
    5. Shadish, William R. & Clark, M. H. & Steiner, Peter M., 2008. "Can Nonrandomized Experiments Yield Accurate Answers? A Randomized Experiment Comparing Random and Nonrandom Assignments," Journal of the American Statistical Association, American Statistical Association, vol. 103(484), pages 1334-1344.
    6. Michael Lechner, 2000. "An Evaluation of Public-Sector-Sponsored Continuous Vocational Training Programs in East Germany," Journal of Human Resources, University of Wisconsin Press, vol. 35(2), pages 347-375.
    7. Justine S. Hastings & Christopher A. Neilson & Seth D. Zimmerman, 2012. "The Effect of School Choice on Intrinsic Motivation and Academic Outcomes," Working Papers 2012-3, Princeton University, Economics Department.
    9. Robert Bifulco & Helen F. Ladd, 2006. "The Impacts of Charter Schools on Student Achievement: Evidence from North Carolina," Education Finance and Policy, MIT Press, vol. 1(1), pages 50-90, January.
    10. A. Smith, Jeffrey & E. Todd, Petra, 2005. "Does matching overcome LaLonde's critique of nonexperimental estimators?," Journal of Econometrics, Elsevier, vol. 125(1-2), pages 305-353.
    11. Guido W. Imbens & Jeffrey M. Wooldridge, 2009. "Recent Developments in the Econometrics of Program Evaluation," Journal of Economic Literature, American Economic Association, vol. 47(1), pages 5-86, March.
    12. Kenneth A. Couch & Robert Bifulco, 2012. "Can Nonexperimental Estimates Replicate Estimates Based on Random Assignment in Evaluations of School Choice? A Within‐Study Comparison," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 31(3), pages 729-751, June.
    13. Elizabeth Ty Wilde & Robinson Hollister, 2007. "How close is close enough? Evaluating propensity score matching using data from a class size reduction experiment," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 26(3), pages 455-477.
    14. Philip Gleason & Melissa Clark & Christina Clark Tuttle & Emily Dwoyer, 2010. "The Evaluation of Charter School Impacts (Presentation)," Mathematica Policy Research Reports 770e250b2ef343a3b1ec8c932, Mathematica Policy Research.
    15. Thomas D. Cook & William R. Shadish & Vivian C. Wong, 2008. "Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from within-study comparisons," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 27(4), pages 724-750.
    16. Roberto Agodini & Mark Dynarski, "undated". "Are Experiments the Only Option? A Look at Dropout Prevention Programs," Mathematica Policy Research Reports 51241adbf9fa4a26add6d54c5, Mathematica Policy Research.
    17. Atila Abdulkadiroğlu & Joshua D. Angrist & Susan M. Dynarski & Thomas J. Kane & Parag A. Pathak, 2011. "Accountability and Flexibility in Public Schools: Evidence from Boston's Charters And Pilots," The Quarterly Journal of Economics, President and Fellows of Harvard College, vol. 126(2), pages 699-748.
    18. Alberto Abadie & Guido W. Imbens, 2008. "On the Failure of the Bootstrap for Matching Estimators," Econometrica, Econometric Society, vol. 76(6), pages 1537-1557, November.
    19. Thomas Fraker & Rebecca Maynard, 1987. "The Adequacy of Comparison Group Designs for Evaluations of Employment-Related Programs," Journal of Human Resources, University of Wisconsin Press, vol. 22(2), pages 194-227.
    20. LaLonde, Robert J, 1986. "Evaluating the Econometric Evaluations of Training Programs with Experimental Data," American Economic Review, American Economic Association, vol. 76(4), pages 604-620, September.
    21. Peikes, Deborah N. & Moreno, Lorenzo & Orzol, Sean Michael, 2008. "Propensity Score Matching: A Note of Caution for Evaluators of Social Programs," The American Statistician, American Statistical Association, vol. 62, pages 222-231, August.
    22. Roberto Agodini & Mark Dynarski, 2004. "Are Experiments the Only Option? A Look at Dropout Prevention Programs," The Review of Economics and Statistics, MIT Press, vol. 86(1), pages 180-194, February.
    23. Will Dobbie & Roland G. Fryer, 2011. "Are High-Quality Schools Enough to Increase Achievement among the Poor? Evidence from the Harlem Children's Zone," American Economic Journal: Applied Economics, American Economic Association, vol. 3(3), pages 158-187, July.
    25. Friedlander, Daniel & Robins, Philip K, 1995. "Evaluating Program Evaluations: New Evidence on Commonly Used Nonexperimental Methods," American Economic Review, American Economic Association, vol. 85(4), pages 923-937, September.
    29. Deborah N. Peikes & Lorenzo Moreno & Sean Michael Orzol, "undated". "Propensity Score Matching: A Note of Caution for Evaluators of Social Programs," Mathematica Policy Research Reports dd0866e4646a4e0ea77079d5b, Mathematica Policy Research.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," CID Working Papers 364, Center for International Development at Harvard University.
    2. Naihobe Gonzalez & Johanna Lacoe & Armando Yañez & Alicia Demers & Sarah Crissey & Natalie Larkin, "undated". "Oakland Unite 2017-2018 Strategy Evaluation: Life Coaching and Employment and Education Support for Youth at Risk of Violence," Mathematica Policy Research Reports 75d308710973407d8f2a3f25c, Mathematica Policy Research.
    3. Christina Clark Tuttle & Philip Gleason & Virginia Knechtel & Ira Nichols-Barrer & Kevin Booker & Gregory Chojnacki & Thomas Coen & Lisbeth Goble, "undated". "Understanding the Effect of KIPP as it Scales: Volume I, Impacts on Achievement and Other Outcomes," Mathematica Policy Research Reports 7d8e94c5e77a4a9c8bf09000d, Mathematica Policy Research.
    4. Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," NBER Working Papers 26080, National Bureau of Economic Research, Inc.
    5. Emma Persson & Sofie Persson & Ulf-G. Gerdtham & Katarina Steen Carlsson, 2019. "Effect of type 1 diabetes on school performance in a dynamic world: new analysis exploring Swedish register data," Applied Economics, Taylor & Francis Journals, vol. 51(24), pages 2606-2622, May.
    6. Naihobe Gonzalez & Johanna Lacoe & Ebo Dawson-Andoh & Armando Yañez & Natasha Nicolai & Sarah Crissey, "undated". "Evaluation of Oakland Unite: Year 1 Strategy Report," Mathematica Policy Research Reports bc1d603fb09a4c1eb2a3cd6b6, Mathematica Policy Research.
    7. Jaime Thomas & Sarah A. Avellar & John Deke & Philip Gleason, 2017. "Matched Comparison Group Design Standards in Systematic Reviews of Early Childhood Interventions," Evaluation Review, vol. 41(3), pages 240-279, June.
    8. Vivian C. Wong & Peter M. Steiner & Kylie L. Anglin, 2018. "What Can Be Learned From Empirical Evaluations of Nonexperimental Methods?," Evaluation Review, vol. 42(2), pages 147-175, April.
    9. Zhang, Chunqin & Juan, Zhicai & Xiao, Guangnian, 2015. "Do contractual practices affect technical efficiency? Evidence from public transport operators in China," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 80(C), pages 39-55.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Kenneth Fortson & Philip Gleason & Emma Kopa & Natalya Verbitsky-Savitz, "undated". "Horseshoes, Hand Grenades, and Treatment Effects? Reassessing Bias in Nonexperimental Estimators," Mathematica Policy Research Reports 1c24988cd5454dd3be51fbc2c, Mathematica Policy Research.
    2. Kenneth Fortson & Natalya Verbitsky-Savitz & Emma Kopa & Philip Gleason, 2012. "Using an Experimental Evaluation of Charter Schools to Test Whether Nonexperimental Comparison Group Methods Can Replicate Experimental Impact Estimates," Mathematica Policy Research Reports 27f871b5b7b94f3a80278a593, Mathematica Policy Research.
    3. Vivian C. Wong & Peter M. Steiner & Kylie L. Anglin, 2018. "What Can Be Learned From Empirical Evaluations of Nonexperimental Methods?," Evaluation Review, vol. 42(2), pages 147-175, April.
    4. Fatih Unlu & Douglas Lee Lauen & Sarah Crittenden Fuller & Tiffany Berglund & Elc Estrera, 2021. "Can Quasi‐Experimental Evaluations That Rely On State Longitudinal Data Systems Replicate Experimental Results?," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 40(2), pages 572-613, March.
    5. Andrew P. Jaciw, 2016. "Applications of a Within-Study Comparison Approach for Evaluating Bias in Generalized Causal Inferences From Comparison Groups Studies," Evaluation Review, vol. 40(3), pages 241-276, June.
    6. Andrew P. Jaciw, 2016. "Assessing the Accuracy of Generalized Inferences From Comparison Group Studies Using a Within-Study Comparison Approach," Evaluation Review, vol. 40(3), pages 199-240, June.
    7. Ferraro, Paul J. & Miranda, Juan José, 2014. "The performance of non-experimental designs in the evaluation of environmental programs: A design-replication study using a large-scale randomized experiment as a benchmark," Journal of Economic Behavior & Organization, Elsevier, vol. 107(PA), pages 344-365.
    8. Ben Weidmann & Luke Miratrix, 2021. "Lurking Inferential Monsters? Quantifying Selection Bias In Evaluations Of School Programs," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 40(3), pages 964-986, June.
    9. Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," NBER Working Papers 26080, National Bureau of Economic Research, Inc.
    10. Heinrich, Carolyn J. & Mueser, Peter R. & Troske, Kenneth & Jeon, Kyung-Seong & Kahvecioglu, Daver C., 2009. "New Estimates of Public Employment and Training Program Net Impacts: A Nonexperimental Evaluation of the Workforce Investment Act Program," IZA Discussion Papers 4569, Institute of Labor Economics (IZA).
    11. Lechner, Michael & Wunsch, Conny, 2013. "Sensitivity of matching-based program evaluations to the availability of control variables," Labour Economics, Elsevier, vol. 21(C), pages 111-121.
    13. Sudhanshu Handa & John A. Maluccio, 2010. "Matching the Gold Standard: Comparing Experimental and Nonexperimental Evaluation Techniques for a Geographically Targeted Program," Economic Development and Cultural Change, University of Chicago Press, vol. 58(3), pages 415-447, April.
    14. Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," CID Working Papers 364, Center for International Development at Harvard University.
    15. Philip M. Gleason & Christina Clark Tuttle & Brian Gill & Ira Nichols-Barrer & Bing-ru Teh, 2014. "Do KIPP Schools Boost Student Achievement?," Education Finance and Policy, MIT Press, vol. 9(1), pages 36-58, January.
    16. Elizabeth Ty Wilde & Robinson Hollister, 2007. "How close is close enough? Evaluating propensity score matching using data from a class size reduction experiment," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 26(3), pages 455-477.
    17. Henrik Hansen & Ninja Ritter Klejnstrup & Ole Winckler Andersen, 2011. "A Comparison of Model-based and Design-based Impact Evaluations of Interventions in Developing Countries," IFRO Working Paper 2011/16, University of Copenhagen, Department of Food and Resource Economics.
    18. Peter M. Steiner & Vivian C. Wong, 2018. "Assessing Correspondence Between Experimental and Nonexperimental Estimates in Within-Study Comparisons," Evaluation Review, vol. 42(2), pages 214-247, April.
    19. McLaughlin, Joanne Song, 2017. "Does Communist party membership pay? Estimating the economic returns to party membership in the labor market in China," Journal of Comparative Economics, Elsevier, vol. 45(4), pages 963-983.
    20. Flores, Carlos A. & Mitnik, Oscar A., 2009. "Evaluating Nonexperimental Estimators for Multiple Treatments: Evidence from Experimental Data," IZA Discussion Papers 4451, Institute of Labor Economics (IZA).
    21. Guido W. Imbens & Jeffrey M. Wooldridge, 2009. "Recent Developments in the Econometrics of Program Evaluation," Journal of Economic Literature, American Economic Association, vol. 47(1), pages 5-86, March.

    More about this item

    Keywords

    Treatment effects; Randomized controlled trials; Nonexperimental methods; Within-study comparison;

    JEL classification:

    • I21 - Health, Education, and Welfare - - Education - - - Analysis of Education
    • C10 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General - - - General
    • C18 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General - - - Methodological Issues: General
    • C21 - Mathematical and Quantitative Methods - - Single Equation Models; Single Variables - - - Cross-Sectional Models; Spatial Models; Treatment Effect Models

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:ecoedu:v:44:y:2015:i:c:p:100-113. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/econedurev .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.