
Internal and External Validity of the Comparative Interrupted Time‐Series Design: A Meta‐Analysis

Author

Listed:
  • Jared Coopersmith
  • Thomas D. Cook
  • Jelena Zurovac
  • Duncan Chaplin
  • Lauren V. Forrow

Abstract

This paper meta‐analyzes 12 heterogeneous studies that examine bias in the comparative interrupted time‐series design (CITS) that is often used to evaluate the effects of social policy interventions. To measure bias, each CITS impact estimate was differenced from the estimate derived from a theoretically unbiased causal benchmark study that tested the same hypothesis with the same treatment group, outcome data, and estimand. In 10 studies, the benchmark was a randomized experiment and in the other two it was a regression‐discontinuity study. Analyses revealed the average standardized CITS bias to be between −0.01 and 0.042 standard deviations; and all but one bias estimate from individual studies fell within 0.10 standard deviations of its benchmark, indicating that the near zero mean bias did not result from averaging many large single study differences. The low mean and generally tight distribution of individual bias estimates suggest that CITS studies are worth recommending for future causal hypothesis tests because: (1) over the studies examined, they generally resulted in high internal validity; and (2) they also promise high external validity because the empirical tests we synthesized occurred across a wide variety of settings, times, interventions, and outcomes.
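
To make the bias measure concrete, the sketch below shows, in plain Python, how one standardized bias estimate could be formed for each within-study comparison and then summarized. The numbers are hypothetical and the simple unweighted mean is only illustrative; the paper's actual synthesis pools the 12 studies with meta-analytic weighting rather than a plain average.

    import numpy as np

    # Hypothetical inputs, one entry per within-study comparison:
    # the CITS impact estimate, the benchmark estimate (from an RCT or a
    # regression-discontinuity study of the same treatment group, outcome,
    # and estimand), and the outcome standard deviation used to standardize.
    cits_est   = np.array([0.12, -0.03, 0.20])
    bench_est  = np.array([0.10, -0.01, 0.15])
    sd_outcome = np.array([1.00,  0.80, 1.20])

    # Standardized bias per study: bias_i = (CITS_i - benchmark_i) / sd_i
    bias = (cits_est - bench_est) / sd_outcome

    print("per-study standardized bias:", np.round(bias, 3))
    print("mean bias:", round(float(bias.mean()), 3))            # unweighted average
    print("max absolute bias:", round(float(np.abs(bias).max()), 3))

In these terms, a near-zero mean of the per-study differences corresponds to low average bias, while a tight spread of the individual differences indicates that the average is not masking large offsetting errors in single studies.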

Suggested Citation

  • Jared Coopersmith & Thomas D. Cook & Jelena Zurovac & Duncan Chaplin & Lauren V. Forrow, 2022. "Internal and External Validity of the Comparative Interrupted Time‐Series Design: A Meta‐Analysis," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 41(1), pages 252-277, January.
  • Handle: RePEc:wly:jpamgt:v:41:y:2022:i:1:p:252-277
    DOI: 10.1002/pam.22361

    Download full text from publisher

    File URL: https://doi.org/10.1002/pam.22361
    Download Restriction: no

    File URL: https://libkey.io/10.1002/pam.22361?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a version of this item that you can access through your library subscription

    References listed on IDEAS

    1. John P A Ioannidis, 2005. "Why Most Published Research Findings Are False," PLOS Medicine, Public Library of Science, vol. 2(8), pages 1-1, August.
    2. Carpenter, Bob & Gelman, Andrew & Hoffman, Matthew D. & Lee, Daniel & Goodrich, Ben & Betancourt, Michael & Brubaker, Marcus & Guo, Jiqiang & Li, Peter & Riddell, Allen, 2017. "Stan: A Probabilistic Programming Language," Journal of Statistical Software, Foundation for Open Access Statistics, vol. 76(i01).
    3. Steven Glazerman & Dan M. Levy & David Myers, 2003. "Nonexperimental Versus Experimental Estimates of Earnings Impacts," The ANNALS of the American Academy of Political and Social Science, vol. 589(1), pages 63-93, September.
    4. Ankita Patnaik, 2019. "Reserving Time for Daddy: The Consequences of Fathers’ Quotas," Journal of Labor Economics, University of Chicago Press, vol. 37(4), pages 1009-1059.
    5. Rubin, Donald B., 2008. "Comment: The Design and Analysis of Gold Standard Randomized Experiments," Journal of the American Statistical Association, American Statistical Association, vol. 103(484), pages 1350-1353.
    6. Elizabeth Tipton & James E. Pustejovsky, 2015. "Small-Sample Adjustments for Tests of Moderators and Model Fit Using Robust Variance Estimation in Meta-Regression," Journal of Educational and Behavioral Statistics, vol. 40(6), pages 604-634, December.
    7. Charles Michalopoulos & Howard S. Bloom & Carolyn J. Hill, 2004. "Can Propensity-Score Methods Match the Findings from a Random Assignment Evaluation of Mandatory Welfare-to-Work Programs?," The Review of Economics and Statistics, MIT Press, vol. 86(1), pages 156-179, February.
    8. Yang Tang & Thomas D. Cook & Yasemin Kisbu-Sakarya & Heinrich Hock & Hanley Chiang, 2017. "The Comparative Regression Discontinuity (CRD) Design: An Overview and Demonstration of its Performance Relative to Basic RD and the Randomized Experiment," Advances in Econometrics, in: Regression Discontinuity Designs, volume 38, pages 237-279, Emerald Group Publishing Limited.
    9. Imbens, Guido W. & Lemieux, Thomas, 2008. "Regression discontinuity designs: A guide to practice," Journal of Econometrics, Elsevier, vol. 142(2), pages 615-635, February.
    10. Duncan D. Chaplin & Thomas D. Cook & Jelena Zurovac & Jared S. Coopersmith & Mariel M. Finucane & Lauren N. Vollmer & Rebecca E. Morris, 2018. "The Internal and External Validity of the Regression Discontinuity Design: A Meta‐Analysis of 15 Within‐Study Comparisons," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 37(2), pages 403-429, March.
    11. Ashenfelter, Orley & Card, David, 1985. "Using the Longitudinal Structure of Earnings to Estimate the Effect of Training Programs," The Review of Economics and Statistics, MIT Press, vol. 67(4), pages 648-660, November.
    12. Elizabeth Ty Wilde & Robinson Hollister, 2007. "How close is close enough? Evaluating propensity score matching using data from a class size reduction experiment," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 26(3), pages 455-477.
    13. Thomas D. Cook & William R. Shadish & Vivian C. Wong, 2008. "Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from within‐study comparisons," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 27(4), pages 724-750, September.
    14. Abadie, Alberto & Diamond, Alexis & Hainmueller, Jens, 2010. "Synthetic Control Methods for Comparative Case Studies: Estimating the Effect of California’s Tobacco Control Program," Journal of the American Statistical Association, American Statistical Association, vol. 105(490), pages 493-505.
    15. LaLonde, Robert J, 1986. "Evaluating the Econometric Evaluations of Training Programs with Experimental Data," American Economic Review, American Economic Association, vol. 76(4), pages 604-620, September.
    16. repec:mpr:mprres:3694 is not listed on IDEAS
    17. Wichman, Casey J. & Ferraro, Paul J., 2017. "A cautionary tale on using panel data estimators to measure program impacts," Economics Letters, Elsevier, vol. 151(C), pages 82-90.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Sam Sims & Jake Anders & Matthew Inglis & Hugues Lortie-Forgues & Ben Styles & Ben Weidmann, 2023. "Experimental education research: rethinking why, how and when to use random assignment," CEPEO Working Paper Series 23-07, UCL Centre for Education Policy and Equalising Opportunities, revised Aug 2023.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Vivian C. Wong & Peter M. Steiner & Kylie L. Anglin, 2018. "What Can Be Learned From Empirical Evaluations of Nonexperimental Methods?," Evaluation Review, vol. 42(2), pages 147-175, April.
    2. Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," CID Working Papers 364, Center for International Development at Harvard University.
    3. Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," NBER Working Papers 26080, National Bureau of Economic Research, Inc.
    4. Andrew P. Jaciw, 2016. "Assessing the Accuracy of Generalized Inferences From Comparison Group Studies Using a Within-Study Comparison Approach," Evaluation Review, vol. 40(3), pages 199-240, June.
    5. Travis St. Clair & Kelly Hallberg & Thomas D. Cook, 2016. "The Validity and Precision of the Comparative Interrupted Time-Series Design," Journal of Educational and Behavioral Statistics, vol. 41(3), pages 269-299, June.
    6. Denis Fougère & Nicolas Jacquemet, 2020. "Policy Evaluation Using Causal Inference Methods," SciencePo Working papers Main hal-03455978, HAL.
    7. Guido W. Imbens & Jeffrey M. Wooldridge, 2009. "Recent Developments in the Econometrics of Program Evaluation," Journal of Economic Literature, American Economic Association, vol. 47(1), pages 5-86, March.
    8. Yang Tang & Thomas D. Cook, 2018. "Statistical Power for the Comparative Regression Discontinuity Design With a Pretest No-Treatment Control Function: Theory and Evidence From the National Head Start Impact Study," Evaluation Review, vol. 42(1), pages 71-110, February.
    9. Fortson, Kenneth & Gleason, Philip & Kopa, Emma & Verbitsky-Savitz, Natalya, 2015. "Horseshoes, hand grenades, and treatment effects? Reassessing whether nonexperimental estimators are biased," Economics of Education Review, Elsevier, vol. 44(C), pages 100-113.
    10. Andrew P. Jaciw, 2016. "Applications of a Within-Study Comparison Approach for Evaluating Bias in Generalized Causal Inferences From Comparison Groups Studies," Evaluation Review, vol. 40(3), pages 241-276, June.
    11. van der Klaauw, Bas, 2014. "From micro data to causality: Forty years of empirical labor economics," Labour Economics, Elsevier, vol. 30(C), pages 88-97.
    12. Robin Jacob & Marie-Andree Somers & Pei Zhu & Howard Bloom, 2016. "The Validity of the Comparative Interrupted Time Series Design for Evaluating the Effect of School-Level Interventions," Evaluation Review, vol. 40(3), pages 167-198, June.
    13. Ben Weidmann & Luke Miratrix, 2021. "Lurking Inferential Monsters? Quantifying Selection Bias In Evaluations Of School Programs," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 40(3), pages 964-986, June.
    14. Greenstone, Michael & Gayer, Ted, 2009. "Quasi-experimental and experimental approaches to environmental economics," Journal of Environmental Economics and Management, Elsevier, vol. 57(1), pages 21-44, January.
    15. Joshua D. Angrist & Jörn-Steffen Pischke, 2010. "The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics," Journal of Economic Perspectives, American Economic Association, vol. 24(2), pages 3-30, Spring.
    16. Peter Hull & Michal Kolesár & Christopher Walters, 2022. "Labor by design: contributions of David Card, Joshua Angrist, and Guido Imbens," Scandinavian Journal of Economics, Wiley Blackwell, vol. 124(3), pages 603-645, July.
    17. Kenneth Fortson & Natalya Verbitsky-Savitz & Emma Kopa & Philip Gleason, 2012. "Using an Experimental Evaluation of Charter Schools to Test Whether Nonexperimental Comparison Group Methods Can Replicate Experimental Impact Estimates," Mathematica Policy Research Reports 27f871b5b7b94f3a80278a593, Mathematica Policy Research.
    18. Richard Blundell & Monica Costa Dias, 2009. "Alternative Approaches to Evaluation in Empirical Microeconomics," Journal of Human Resources, University of Wisconsin Press, vol. 44(3).
    19. Nianbo Dong & Mark W. Lipsey, 2018. "Can Propensity Score Analysis Approximate Randomized Experiments Using Pretest and Demographic Information in Pre-K Intervention Research?," Evaluation Review, vol. 42(1), pages 34-70, February.
    20. Kenneth Fortson & Philip Gleason & Emma Kopa & Natalya Verbitsky-Savitz, "undated". "Horseshoes, Hand Grenades, and Treatment Effects? Reassessing Bias in Nonexperimental Estimators," Mathematica Policy Research Reports 1c24988cd5454dd3be51fbc2c, Mathematica Policy Research.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:wly:jpamgt:v:41:y:2022:i:1:p:252-277. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Wiley Content Delivery (email available below). General contact details of provider: http://www3.interscience.wiley.com/journal/34787/home.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.