
The BASIE (BAyeSian Interpretation of Estimates) Framework for Interpreting Findings from Impact Evaluations: A Practical Guide for Education Researchers

Authors

  • John Deke
  • Mariel Finucane
  • Daniel Thal

Abstract

This guide walks researchers through the key steps of applying BASIE, including selecting prior evidence, reporting impact estimates, interpreting impact estimates, and conducting sensitivity analyses.
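The interpretation step rests on standard Bayesian logic: the impact estimate and its standard error are combined with a prior distribution that summarizes prior evidence, yielding a posterior probability that the true effect exceeds some threshold of interest. As a hedged illustration only (the guide itself defines the exact procedure), the sketch below works through a normal-normal version of this calculation; the function name, prior parameters, and example numbers are illustrative assumptions, not values taken from the guide.

```python
from math import erf, sqrt

def prob_effect_exceeds(estimate, std_error, prior_mean, prior_sd, threshold=0.0):
    """Posterior probability that the true effect exceeds `threshold`,
    under a normal prior (summarizing prior evidence) and a normal
    likelihood centered at the impact estimate. Illustrative sketch only."""
    prior_precision = 1.0 / prior_sd ** 2
    data_precision = 1.0 / std_error ** 2
    post_var = 1.0 / (prior_precision + data_precision)
    post_mean = post_var * (prior_precision * prior_mean + data_precision * estimate)
    post_sd = sqrt(post_var)
    # Standard normal CDF evaluated via the error function
    z = (threshold - post_mean) / post_sd
    return 1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical numbers: an estimated effect size of 0.15 (SE 0.08) and a
# prior centered at 0.06 with SD 0.12, loosely in the spirit of priors
# built from past education evaluations.
print(round(prob_effect_exceeds(0.15, 0.08, 0.06, 0.12), 3))
```

Sensitivity analyses of the kind the guide describes would then rerun this calculation under alternative priors to see how much the reported probability depends on the choice of prior evidence.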

Suggested Citation

  • John Deke & Mariel Finucane & Daniel Thal, "undated". "The BASIE (BAyeSian Interpretation of Estimates) Framework for Interpreting Findings from Impact Evaluations: A Practical Guide for Education Researchers," Mathematica Policy Research Reports 5a0d5dff375d42048799878be, Mathematica Policy Research.
  • Handle: RePEc:mpr:mprres:5a0d5dff375d42048799878be935e0df

    Download full text from publisher

    File URL: https://www.mathematica.org/-/media/publications/pdfs/education/2022/basie-practical-guide-ncee-2022-005.pdf
    Download Restriction: no

    References listed on IDEAS

    1. Hansen, Christian & McDonald, James B. & Newey, Whitney K., 2010. "Instrumental Variables Estimation With Flexible Distributions," Journal of Business & Economic Statistics, American Statistical Association, vol. 28(1), pages 13-25.
    2. LaLonde, Robert J, 1986. "Evaluating the Econometric Evaluations of Training Programs with Experimental Data," American Economic Review, American Economic Association, vol. 76(4), pages 604-620, September.
    3. Andrew Gelman & Christian Hennig, 2017. "Beyond subjective and objective in statistics," Journal of the Royal Statistical Society Series A, Royal Statistical Society, vol. 180(4), pages 967-1033, October.
    4. Ben Weidmann & Luke Miratrix, 2021. "Lurking Inferential Monsters? Quantifying Selection Bias In Evaluations Of School Programs," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 40(3), pages 964-986, June.
    5. J. P. Royston, 1982. "Expected Normal Order Statistics (Exact and Approximate)," Journal of the Royal Statistical Society Series C, Royal Statistical Society, vol. 31(2), pages 161-165, June.
    6. Jelena Zurovac & Thomas D. Cook & John Deke & Mariel M. Finucane & Duncan Chaplin & Jared S. Coopersmith & Michael Barna & Lauren Vollmer Forrow, 2021. "Absolute and Relative Bias in Eight Common Observational Study Designs: Evidence from a Meta-analysis," Papers 2111.06941, arXiv.org, revised Nov 2021.
    7. David Kaplan, 2021. "On the Quantification of Model Uncertainty: A Bayesian Perspective," Psychometrika, Springer;The Psychometric Society, vol. 86(1), pages 215-238, March.
    8. Archy O. de Berker & Robb B. Rutledge & Christoph Mathys & Louise Marshall & Gemma F. Cross & Raymond J. Dolan & Sven Bestmann, 2016. "Computations of uncertainty mediate acute stress responses in humans," Nature Communications, Nature, vol. 7(1), pages 1-11, April.
    9. Thomas D. Cook & William R. Shadish & Vivian C. Wong, 2008. "Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from within-study comparisons," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 27(4), pages 724-750.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one; a rough sketch of this overlap matching follows the list below.
    1. Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," NBER Working Papers 26080, National Bureau of Economic Research, Inc.
    2. Anna Aizer & Shari Eli & Adriana Lleras-Muney & Keyoung Lee, 2020. "Do Youth Employment Programs Work? Evidence from the New Deal," NBER Working Papers 27103, National Bureau of Economic Research, Inc.
    3. Robin Jacob & Marie-Andree Somers & Pei Zhu & Howard Bloom, 2016. "The Validity of the Comparative Interrupted Time Series Design for Evaluating the Effect of School-Level Interventions," Evaluation Review, , vol. 40(3), pages 167-198, June.
    4. Ben Weidmann & Luke Miratrix, 2021. "Lurking Inferential Monsters? Quantifying Selection Bias In Evaluations Of School Programs," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 40(3), pages 964-986, June.
    5. Peter M. Steiner, 2011. "Propensity Score Methods for Causal Inference: On the Relative Importance of Covariate Selection, Reliable Measurement, and Choice of Propensity Score Technique," Working Papers 09, AlmaLaurea Inter-University Consortium.
    6. Yonatan Eyal, 2020. "Self-Assessment Variables as a Source of Information in the Evaluation of Intervention Programs: A Theoretical and Methodological Framework," SAGE Open, , vol. 10(1), pages 21582440198, January.
    7. Jared Coopersmith & Thomas D. Cook & Jelena Zurovac & Duncan Chaplin & Lauren V. Forrow, 2022. "Internal And External Validity Of The Comparative Interrupted Time‐Series Design: A Meta‐Analysis," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 41(1), pages 252-277, January.
    8. Fatih Unlu & Douglas Lee Lauen & Sarah Crittenden Fuller & Tiffany Berglund & Elc Estrera, 2021. "Can Quasi‐Experimental Evaluations That Rely On State Longitudinal Data Systems Replicate Experimental Results?," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 40(2), pages 572-613, March.
    9. Daniel Litwok, 2020. "Using Nonexperimental Methods to Address Noncompliance," Upjohn Working Papers 20-324, W.E. Upjohn Institute for Employment Research.
    10. David M. Rindskopf & William R. Shadish & M. H. Clark, 2018. "Using Bayesian Correspondence Criteria to Compare Results From a Randomized Experiment and a Quasi-Experiment Allowing Self-Selection," Evaluation Review, , vol. 42(2), pages 248-280, April.
    11. Fortson, Kenneth & Gleason, Philip & Kopa, Emma & Verbitsky-Savitz, Natalya, 2015. "Horseshoes, hand grenades, and treatment effects? Reassessing whether nonexperimental estimators are biased," Economics of Education Review, Elsevier, vol. 44(C), pages 100-113.
    12. Henrik Hansen & Ninja Ritter Klejnstrup & Ole Winckler Andersen, 2011. "A Comparison of Model-based and Design-based Impact Evaluations of Interventions in Developing Countries," IFRO Working Paper 2011/16, University of Copenhagen, Department of Food and Resource Economics.
    13. Bernard Black & Woochan Kim & Julia Nasev, 2021. "The Effect of Board Structure on Firm Disclosure and Behavior: A Case Study of Korea and a Comparison of Research Designs," Journal of Empirical Legal Studies, John Wiley & Sons, vol. 18(2), pages 328-376, June.
    14. Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," CID Working Papers 364, Center for International Development at Harvard University.
    15. Andrew P. Jaciw, 2016. "Applications of a Within-Study Comparison Approach for Evaluating Bias in Generalized Causal Inferences From Comparison Groups Studies," Evaluation Review, , vol. 40(3), pages 241-276, June.
    16. Vivian C. Wong & Peter M. Steiner & Kylie L. Anglin, 2018. "What Can Be Learned From Empirical Evaluations of Nonexperimental Methods?," Evaluation Review, , vol. 42(2), pages 147-175, April.
    17. Onur Altindag & Theodore J. Joyce & Julie A. Reeder, 2019. "Can Nonexperimental Methods Provide Unbiased Estimates of a Breastfeeding Intervention? A Within-Study Comparison of Peer Counseling in Oregon," Evaluation Review, , vol. 43(3-4), pages 152-188, June.
    18. Andrew P. Jaciw, 2016. "Assessing the Accuracy of Generalized Inferences From Comparison Group Studies Using a Within-Study Comparison Approach," Evaluation Review, , vol. 40(3), pages 199-240, June.
    19. Tarek Azzam & Michael Bates & David Fairris, 2019. "Do Learning Communities Increase First Year College Retention? Testing Sample Selection and External Validity of Randomized Control Trials," Working Papers 202002, University of California at Riverside, Department of Economics.
    20. Delbianco Fernando & Tohmé Fernando, 2023. "What is a relevant control?: An algorithmic proposal," Asociación Argentina de Economía Política: Working Papers 4643, Asociación Argentina de Economía Política.
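As a rough sketch of the overlap matching described above, and not RePEc's actual ranking code, the snippet below scores candidate items by counting shared cited works plus shared citing works; the data structures, identifiers, and equal weighting are assumptions made for illustration.

```python
def relatedness(item, candidate, references, cited_by):
    """Score a candidate by works both items cite plus works citing both.
    `references` and `cited_by` map an item to a set of work identifiers."""
    shared_refs = len(references[item] & references[candidate])
    shared_citers = len(cited_by[item] & cited_by[candidate])
    return shared_refs + shared_citers

# Toy data: the BASIE guide and two hypothetical candidate items.
references = {
    "basie": {"lalonde1986", "gelman2017", "cook2008"},
    "cand_a": {"lalonde1986", "cook2008", "other2019"},
    "cand_b": {"other2019"},
}
cited_by = {
    "basie": {"review2023"},
    "cand_a": {"review2023"},
    "cand_b": set(),
}
ranked = sorted(["cand_a", "cand_b"],
                key=lambda c: relatedness("basie", c, references, cited_by),
                reverse=True)
print(ranked)  # cand_a shares more cited and citing works, so it ranks first
```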

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:mpr:mprres:5a0d5dff375d42048799878be935e0df. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link it to an item in RePEc, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Joanne Pfleiderer or Cindy George (email available below). General contact details of provider: https://edirc.repec.org/data/mathius.html.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.