
How close is close enough? Evaluating propensity score matching using data from a class size reduction experiment
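
The citing studies listed below are largely within-study comparisons: they ask whether nonexperimental estimators such as propensity score matching can reproduce a benchmark from a randomized experiment. For readers unfamiliar with the method under evaluation, the sketch below illustrates the basic propensity score matching recipe on synthetic data. It is a minimal illustration under assumed inputs, not the paper's implementation; all data and variable names are hypothetical.

    # Minimal propensity score matching sketch (synthetic data; an
    # illustration only, NOT the implementation evaluated in the paper).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)

    # Synthetic observational data: two covariates drive both treatment
    # take-up and the outcome, so a naive mean difference is confounded.
    n = 2000
    X = rng.normal(size=(n, 2))
    p_true = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
    treated = rng.random(n) < p_true
    y = 2.0 * treated + X[:, 0] + rng.normal(size=n)  # true effect = 2.0

    # Step 1: estimate the propensity score e(x) = P(T = 1 | X = x).
    ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

    # Step 2: match each treated unit to its nearest control on the score.
    controls = np.flatnonzero(~treated)
    nn = NearestNeighbors(n_neighbors=1).fit(ps[controls].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
    matched = controls[idx.ravel()]

    # Step 3: average treatment effect on the treated from matched pairs.
    att = (y[treated] - y[matched]).mean()
    print(f"naive diff: {y[treated].mean() - y[~treated].mean():.2f}")
    print(f"matched ATT: {att:.2f}  (true effect: 2.0)")

One-to-one nearest-neighbor matching with replacement, as sketched here, is only one of many estimator variants examined in the citing literature.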

Citations

Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


Cited by:

  1. Katz, Lawrence & Duncan, Greg J. & Kling, Jeffrey R. & Kessler, Ronald C. & Ludwig, Jens & Sanbonmatsu, Lisa & Liebman, Jeffrey B., 2008. "What Can We Learn about Neighborhood Effects from the Moving to Opportunity Experiment?," Scholarly Articles 2766959, Harvard University Department of Economics.
  2. Kenneth Fortson & Natalya Verbitsky-Savitz & Emma Kopa & Philip Gleason, 2012. "Using an Experimental Evaluation of Charter Schools to Test Whether Nonexperimental Comparison Group Methods Can Replicate Experimental Impact Estimates," Mathematica Policy Research Reports 27f871b5b7b94f3a80278a593, Mathematica Policy Research.
  3. Barnow, Burt S., 2010. "Setting up social experiments: the good, the bad, and the ugly," Zeitschrift für ArbeitsmarktForschung - Journal for Labour Market Research, Institut für Arbeitsmarkt- und Berufsforschung (IAB), Nürnberg [Institute for Employment Research, Nuremberg, Germany], vol. 43(2), pages 91-105.
  4. Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," CID Working Papers 364, Center for International Development at Harvard University.
  5. Andrew P. Jaciw, 2016. "Applications of a Within-Study Comparison Approach for Evaluating Bias in Generalized Causal Inferences From Comparison Groups Studies," Evaluation Review, vol. 40(3), pages 241-276, June.
  6. Luke Byrne Willard, 2012. "Does inflation targeting matter? A reassessment," Applied Economics, Taylor & Francis Journals, vol. 44(17), pages 2231-2244, June.
  7. Jared Coopersmith & Thomas D. Cook & Jelena Zurovac & Duncan Chaplin & Lauren V. Forrow, 2022. "Internal And External Validity Of The Comparative Interrupted Time‐Series Design: A Meta‐Analysis," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 41(1), pages 252-277, January.
  8. David J. Harding & Lisa Sanbonmatsu & Greg J. Duncan & Lisa A. Gennetian & Lawrence F. Katz & Ronald C. Kessler & Jeffrey R. Kling & Matthew Sciandra & Jens Ludwig, 2023. "Evaluating Contradictory Experimental and Nonexperimental Estimates of Neighborhood Effects on Economic Outcomes for Adults," Housing Policy Debate, Taylor & Francis Journals, vol. 33(2), pages 453-486, March.
  9. Kenneth Fortson & Philip Gleason & Emma Kopa & Natalya Verbitsky-Savitz, "undated". "Horseshoes, Hand Grenades, and Treatment Effects? Reassessing Bias in Nonexperimental Estimators," Mathematica Policy Research Reports 1c24988cd5454dd3be51fbc2c, Mathematica Policy Research.
  10. Gary King & Emmanuela Gakidou & Nirmala Ravishankar & Ryan T. Moore & Jason Lakin & Manett Vargas & Martha María Téllez-Rojo & Juan Eugenio Hernández Ávila & Mauricio Hernández Ávila & Héctor Hernánde, 2007. "A “politically robust” experimental design for public policy evaluation, with application to the Mexican Universal Health Insurance program," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 26(3), pages 479-506.
  11. Fatih Unlu & Douglas Lee Lauen & Sarah Crittenden Fuller & Tiffany Berglund & Elc Estrera, 2021. "Can Quasi‐Experimental Evaluations That Rely On State Longitudinal Data Systems Replicate Experimental Results?," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 40(2), pages 572-613, March.
  12. Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," NBER Working Papers 26080, National Bureau of Economic Research, Inc.
  13. Gennetian, Lisa A. & Hill, Heather D. & London, Andrew S. & Lopoo, Leonard M., 2010. "Maternal employment and the health of low-income young children," Journal of Health Economics, Elsevier, vol. 29(3), pages 353-363, May.
  14. Ferraro, Paul J. & Miranda, Juan José, 2014. "The performance of non-experimental designs in the evaluation of environmental programs: A design-replication study using a large-scale randomized experiment as a benchmark," Journal of Economic Behavior & Organization, Elsevier, vol. 107(PA), pages 344-365.
  15. José M. Cordero & Víctor Cristóbal & Daniel Santín, 2018. "Causal Inference On Education Policies: A Survey Of Empirical Studies Using PISA, TIMSS and PIRLS," Journal of Economic Surveys, Wiley Blackwell, vol. 32(3), pages 878-915, July.
  16. Nianbo Dong & Mark W. Lipsey, 2018. "Can Propensity Score Analysis Approximate Randomized Experiments Using Pretest and Demographic Information in Pre-K Intervention Research?," Evaluation Review, vol. 42(1), pages 34-70, February.
  17. Richard P. Nathan, 2008. "The role of random assignment in social policy research," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 27(2), pages 401-415.
  18. David A. Freedman & Richard A. Berk, 2008. "Weighting Regressions by Propensity Scores," Evaluation Review, vol. 32(4), pages 392-409, August.
  19. Vivian C. Wong & Peter M. Steiner & Kylie L. Anglin, 2018. "What Can Be Learned From Empirical Evaluations of Nonexperimental Methods?," Evaluation Review, vol. 42(2), pages 147-175, April.
  20. Robert Bifulco, 2010. "Can Propensity Score Analysis Replicate Estimates Based on Random Assignment in Evaluations of School Choice? A Within-Study Comparison," Center for Policy Research Working Papers 124, Center for Policy Research, Maxwell School, Syracuse University.
  21. Fortson, Kenneth & Gleason, Philip & Kopa, Emma & Verbitsky-Savitz, Natalya, 2015. "Horseshoes, hand grenades, and treatment effects? Reassessing whether nonexperimental estimators are biased," Economics of Education Review, Elsevier, vol. 44(C), pages 100-113.
  22. Ben Weidmann & Luke Miratrix, 2021. "Lurking Inferential Monsters? Quantifying Selection Bias In Evaluations Of School Programs," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 40(3), pages 964-986, June.
  23. Gonzalo Nunez-Chaim & Henry G. Overman & Capucine Riom, 2024. "Does subsidising business advice improve firm performance? Evidence from a large RCT," CEP Discussion Papers dp1977, Centre for Economic Performance, LSE.
  24. Andrew P. Jaciw, 2016. "Assessing the Accuracy of Generalized Inferences From Comparison Group Studies Using a Within-Study Comparison Approach," Evaluation Review, vol. 40(3), pages 199-240, June.
  25. David A. Freedman, 2009. "Limits of Econometrics," International Econometric Review (IER), Econometric Research Association, vol. 1(1), pages 5-17, April.
  26. Nianbo Dong & Elizabeth A. Stuart & David Lenis & Trang Quynh Nguyen, 2020. "Using Propensity Score Analysis of Survey Data to Estimate Population Average Treatment Effects: A Case Study Comparing Different Methods," Evaluation Review, vol. 44(1), pages 84-108, February.