Printed from https://ideas.repec.org/a/wly/jpamgt/v28y2009i1p169-172.html

Are the alternatives to randomized assignment nearly as good? Statistical corrections to nonrandomized evaluations

Author

Listed:
  • Maureen A. Pirog

    (No Affiliation)

  • Anne L. Buffardi

    (No Affiliation)

  • Colleen K. Chrisinger

    (No Affiliation)

  • Pradeep Singh

    (No Affiliation)

  • John Briney

    (No Affiliation)

Abstract

No abstract is available for this item.

Suggested Citation

  • Maureen A. Pirog & Anne L. Buffardi & Colleen K. Chrisinger & Pradeep Singh & John Briney, 2009. "Are the alternatives to randomized assignment nearly as good? Statistical corrections to nonrandomized evaluations," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 28(1), pages 169-172.
  • Handle: RePEc:wly:jpamgt:v:28:y:2009:i:1:p:169-172
    DOI: 10.1002/pam.20411

    Download full text from publisher

    File URL: http://hdl.handle.net/10.1002/pam.20411
    File Function: Link to full text; subscription required
    Download Restriction: no

    File URL: https://libkey.io/10.1002/pam.20411?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Smith, Jeffrey A. & Todd, Petra E., 2005. "Does matching overcome LaLonde's critique of nonexperimental estimators?," Journal of Econometrics, Elsevier, vol. 125(1-2), pages 305-353.
    2. David Greenberg & Marvin Mandell & Matthew Onstott, 2000. "The dissemination and utilization of welfare-to-work experiments in state policymaking," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 19(3), pages 367-382.
    3. Richard P. Nathan, 2008. "The role of random assignment in social policy research," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 27(3), pages 606-606.
    4. Jason K. Luellen & William R. Shadish & M. H. Clark, 2005. "Propensity Scores," Evaluation Review, , vol. 29(6), pages 530-558, December.
    5. James Heckman & Hidehiko Ichimura & Jeffrey Smith & Petra Todd, 1998. "Characterizing Selection Bias Using Experimental Data," Econometrica, Econometric Society, vol. 66(5), pages 1017-1098, September.
    6. Buddelmeyer, Hielke & Skoufias, Emmanuel, 2003. "An Evaluation of the Performance of Regression Discontinuity Design on PROGRESA," IZA Discussion Papers 827, Institute of Labor Economics (IZA).
    7. Battistin, Erich & Rettore, Enrico, 2008. "Ineligibles and eligible non-participants as a double comparison group in regression-discontinuity designs," Journal of Econometrics, Elsevier, vol. 142(2), pages 715-730, February.
    8. Richard P. Nathan, 2008. "The role of random assignment in social policy research," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 27(2), pages 401-415.
    9. Roberto Agodini & Mark Dynarski, "undated". "Are Experiments the Only Option? A Look at Dropout Prevention Programs," Mathematica Policy Research Reports 51241adbf9fa4a26add6d54c5, Mathematica Policy Research.
    10. Roberto Agodini & Mark Dynarski, 2004. "Are Experiments the Only Option? A Look at Dropout Prevention Programs," The Review of Economics and Statistics, MIT Press, vol. 86(1), pages 180-194, February.
    11. Carolyn Heinrich, 2008. "False or Fitting Recognition? The Use of High Performance Bonuses in Motivating Organizational Achievements," Public administration issues, Higher School of Economics, issue 4, pages 72-104.
    12. Arceneaux, Kevin & Gerber, Alan S. & Green, Donald P., 2006. "Comparing Experimental and Matching Methods Using a Large-Scale Voter Mobilization Experiment," Political Analysis, Cambridge University Press, vol. 14(1), pages 37-62, January.
    13. Juan Jose Diaz & Sudhanshu Handa, 2006. "An Assessment of Propensity Score Matching as a Nonexperimental Impact Estimator: Evidence from Mexico’s PROGRESA Program," Journal of Human Resources, University of Wisconsin Press, vol. 41(2).
    14. Thomas Fraker & Rebecca Maynard, 1987. "The Adequacy of Comparison Group Designs for Evaluations of Employment-Related Programs," Journal of Human Resources, University of Wisconsin Press, vol. 22(2), pages 194-227.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Robert Haveman & Barbara Wolfe, 2012. "Long-Term Effects of Public Low-Income Housing Vouchers: Work, Neighborhood, Family Composition and Childcare Usage," CEPR Discussion Papers 667, Centre for Economic Policy Research, Research School of Economics, Australian National University.
    2. Carlson, Deven & Haveman, Robert & Kaplan, Thomas & Wolfe, Barbara, 2012. "Long-term effects of public low-income housing vouchers on neighborhood quality and household composition," Journal of Housing Economics, Elsevier, vol. 21(2), pages 101-120.
    3. Carlson, Deven & Haveman, Robert & Kaplan, Tom & Wolfe, Barbara, 2012. "Long-term earnings and employment effects of housing voucher receipt," Journal of Urban Economics, Elsevier, vol. 71(1), pages 128-150.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Ferraro, Paul J. & Miranda, Juan José, 2014. "The performance of non-experimental designs in the evaluation of environmental programs: A design-replication study using a large-scale randomized experiment as a benchmark," Journal of Economic Behavior & Organization, Elsevier, vol. 107(PA), pages 344-365.
    2. Justine Burns & Malcolm Kewsell & Rebecca Thornton, 2009. "Evaluating the Impact of Health Programmes," SALDRU Working Papers 40, Southern Africa Labour and Development Research Unit, University of Cape Town.
    3. Vivian C. Wong & Peter M. Steiner & Kylie L. Anglin, 2018. "What Can Be Learned From Empirical Evaluations of Nonexperimental Methods?," Evaluation Review, , vol. 42(2), pages 147-175, April.
    4. Henrik Hansen & Ninja Ritter Klejnstrup & Ole Winckler Andersen, 2011. "A Comparison of Model-based and Design-based Impact Evaluations of Interventions in Developing Countries," IFRO Working Paper 2011/16, University of Copenhagen, Department of Food and Resource Economics.
    5. Ravallion, Martin, 2008. "Evaluating Anti-Poverty Programs," Handbook of Development Economics, in: T. Paul Schultz & John A. Strauss (ed.), Handbook of Development Economics, edition 1, volume 4, chapter 59, pages 3787-3846, Elsevier.
    6. Andrew P. Jaciw, 2016. "Assessing the Accuracy of Generalized Inferences From Comparison Group Studies Using a Within-Study Comparison Approach," Evaluation Review, , vol. 40(3), pages 199-240, June.
    7. Rajeev Dehejia, 2013. "The Porous Dialectic: Experimental and Non-Experimental Methods in Development Economics," WIDER Working Paper Series wp-2013-011, World Institute for Development Economic Research (UNU-WIDER).
    8. Gonzalo Nunez-Chaim & Henry G. Overman & Capucine Riom, 2024. "Does subsidising business advice improve firm performance? Evidence from a large RCT," CEP Discussion Papers dp1977, Centre for Economic Performance, LSE.
    9. Dehejia Rajeev, 2015. "Experimental and Non-Experimental Methods in Development Economics: A Porous Dialectic," Journal of Globalization and Development, De Gruyter, vol. 6(1), pages 47-69, June.
    10. Duflo, Esther & Glennerster, Rachel & Kremer, Michael, 2008. "Using Randomization in Development Economics Research: A Toolkit," Handbook of Development Economics, in: T. Paul Schultz & John A. Strauss (ed.), Handbook of Development Economics, edition 1, volume 4, chapter 61, pages 3895-3962, Elsevier.
    11. Dehejia, Rajeev, 2013. "The Porous Dialectic: Experimental and Non-Experimental Methods in Development Economics," WIDER Working Paper Series 011, World Institute for Development Economic Research (UNU-WIDER).
    12. Sauermann, Jan & Stenberg, Anders, 2020. "Assessing Selection Bias in Non-Experimental Estimates of the Returns to Workplace Training," IZA Discussion Papers 13789, Institute of Labor Economics (IZA).
    13. Fatih Unlu & Douglas Lee Lauen & Sarah Crittenden Fuller & Tiffany Berglund & Elc Estrera, 2021. "Can Quasi‐Experimental Evaluations That Rely On State Longitudinal Data Systems Replicate Experimental Results?," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 40(2), pages 572-613, March.
    14. Kristen Harknett, 2006. "Does Receiving an Earnings Supplement Affect Union Formation? Estimating Effects for Program Participants Using Propensity Score Matching," Evaluation Review, , vol. 30(6), pages 741-778, December.
    15. Handa, Sudhanshu & Pineda, Heiling & Esquivel, Yannete & Lopez, Blancadilia & Gurdian, Nidia Veronica & Regalia, Ferdinando, 2009. "Non-formal basic education as a development priority: Evidence from Nicaragua," Economics of Education Review, Elsevier, vol. 28(4), pages 512-522, August.
    16. Fortson, Kenneth & Gleason, Philip & Kopa, Emma & Verbitsky-Savitz, Natalya, 2015. "Horseshoes, hand grenades, and treatment effects? Reassessing whether nonexperimental estimators are biased," Economics of Education Review, Elsevier, vol. 44(C), pages 100-113.
    17. Sudhanshu Handa & John A. Maluccio, 2010. "Matching the Gold Standard: Comparing Experimental and Nonexperimental Evaluation Techniques for a Geographically Targeted Program," Economic Development and Cultural Change, University of Chicago Press, vol. 58(3), pages 415-447, April.
    18. Andrew P. Jaciw, 2016. "Applications of a Within-Study Comparison Approach for Evaluating Bias in Generalized Causal Inferences From Comparison Groups Studies," Evaluation Review, , vol. 40(3), pages 241-276, June.
    19. repec:mpr:mprres:4778 is not listed on IDEAS
    20. Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," NBER Working Papers 26080, National Bureau of Economic Research, Inc.
    21. Flores, Carlos A. & Mitnik, Oscar A., 2009. "Evaluating Nonexperimental Estimators for Multiple Treatments: Evidence from Experimental Data," IZA Discussion Papers 4451, Institute of Labor Economics (IZA).

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:wly:jpamgt:v:28:y:2009:i:1:p:169-172. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Wiley Content Delivery (email available below). General contact details of provider: http://www3.interscience.wiley.com/journal/34787/home .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.