
Merging the accountability and scientific research requirements of the No Child Left Behind Act: using cohort control groups

Author

Listed:
  • Jean Stockard

Abstract

This article shows how assessment data such as those mandated by the No Child Left Behind Act can be used to examine the effectiveness of educational interventions and meet the Act’s mandate for “scientifically based research.” Based on the classic research design literature, a cohort control group design and a cohort control group with historical comparisons design are suggested as internally valid analyses. The logic of the “grounded theory of generalized causal inference” is used to develop externally valid results. The procedure is illustrated with published data regarding the Reading Mastery curriculum. Empirical results are comparable to those obtained in meta-analyses of the curriculum, with effect sizes surpassing the usual criterion for educational importance. Implications for school officials and policy makers are discussed. Copyright Springer Science+Business Media B.V. 2013

Suggested Citation

  • Jean Stockard, 2013. "Merging the accountability and scientific research requirements of the No Child Left Behind Act: using cohort control groups," Quality & Quantity: International Journal of Methodology, Springer, vol. 47(4), pages 2225-2257, June.
  • Handle: RePEc:spr:qualqt:v:47:y:2013:i:4:p:2225-2257
    DOI: 10.1007/s11135-011-9652-5

    Download full text from publisher

    File URL: http://hdl.handle.net/10.1007/s11135-011-9652-5
    Download Restriction: Access to full text is restricted to subscribers.

    File URL: https://libkey.io/10.1007/s11135-011-9652-5?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.
    ---><---

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Steven Glazerman & Dan M. Levy & David Myers, 2003. "Nonexperimental Versus Experimental Estimates of Earnings Impacts," The ANNALS of the American Academy of Political and Social Science, , vol. 589(1), pages 63-93, September.
    2. Shadish, William R. & Clark, M. H. & Steiner, Peter M., 2008. "Can Nonrandomized Experiments Yield Accurate Answers? A Randomized Experiment Comparing Random and Nonrandom Assignments," Journal of the American Statistical Association, American Statistical Association, vol. 103(484), pages 1334-1344.
    3. Roberto Agodini & Mark Dynarski, "undated". "Are Experiments the Only Option? A Look at Dropout Prevention Programs," Mathematica Policy Research Reports 51241adbf9fa4a26add6d54c5, Mathematica Policy Research.
    4. Roberto Agodini & Mark Dynarski, 2004. "Are Experiments the Only Option? A Look at Dropout Prevention Programs," The Review of Economics and Statistics, MIT Press, vol. 86(1), pages 180-194, February.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Fortson, Kenneth & Gleason, Philip & Kopa, Emma & Verbitsky-Savitz, Natalya, 2015. "Horseshoes, hand grenades, and treatment effects? Reassessing whether nonexperimental estimators are biased," Economics of Education Review, Elsevier, vol. 44(C), pages 100-113.
    2. Andrew P. Jaciw, 2016. "Applications of a Within-Study Comparison Approach for Evaluating Bias in Generalized Causal Inferences From Comparison Groups Studies," Evaluation Review, , vol. 40(3), pages 241-276, June.
    3. Vivian C. Wong & Peter M. Steiner & Kylie L. Anglin, 2018. "What Can Be Learned From Empirical Evaluations of Nonexperimental Methods?," Evaluation Review, , vol. 42(2), pages 147-175, April.
    4. Andrew P. Jaciw, 2016. "Assessing the Accuracy of Generalized Inferences From Comparison Group Studies Using a Within-Study Comparison Approach," Evaluation Review, , vol. 40(3), pages 199-240, June.
    5. Rebecca A. Maynard, 2006. "Presidential address: Evidence-based decision making: What will it take for the decision makers to care?," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 25(2), pages 249-265.
    6. Kenneth Fortson & Natalya Verbitsky-Savitz & Emma Kopa & Philip Gleason, 2012. "Using an Experimental Evaluation of Charter Schools to Test Whether Nonexperimental Comparison Group Methods Can Replicate Experimental Impact Estimates," Mathematica Policy Research Reports 27f871b5b7b94f3a80278a593, Mathematica Policy Research.
    7. Ferraro, Paul J. & Miranda, Juan José, 2014. "The performance of non-experimental designs in the evaluation of environmental programs: A design-replication study using a large-scale randomized experiment as a benchmark," Journal of Economic Behavior & Organization, Elsevier, vol. 107(PA), pages 344-365.
    8. Kenneth Fortson & Philip Gleason & Emma Kopa & Natalya Verbitsky-Savitz, "undated". "Horseshoes, Hand Grenades, and Treatment Effects? Reassessing Bias in Nonexperimental Estimators," Mathematica Policy Research Reports 1c24988cd5454dd3be51fbc2c, Mathematica Policy Research.
    9. Thomas D. Cook & William R. Shadish & Vivian C. Wong, 2008. "Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from within-study comparisons," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 27(4), pages 724-750.
    10. Jason K. Luellen & William R. Shadish & M. H. Clark, 2005. "Propensity Scores," Evaluation Review, , vol. 29(6), pages 530-558, December.
    11. Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," NBER Working Papers 26080, National Bureau of Economic Research, Inc.
    12. William Bosshardt & Neela Manage, 2011. "Does Calculus Help in Principles of Economics Courses? Estimates Using Matching Estimators," The American Economist, Sage Publications, vol. 56(1), pages 29-37, May.
    13. Thomas D. Cook & Dominique Foray, 2007. "Building the Capacity to Experiment in Schools: A Case Study of the Institute of Educational Sciences in the US Department of Education," Economics of Innovation and New Technology, Taylor & Francis Journals, vol. 16(5), pages 385-402.
    14. Ben Weidmann & Luke Miratrix, 2021. "Lurking Inferential Monsters? Quantifying Selection Bias In Evaluations Of School Programs," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 40(3), pages 964-986, June.
    15. Aga, Deribe Assefa, 2016. "Factors affecting the success of development projects : A behavioral perspective," Other publications TiSEM 867ae95e-d53d-4a68-ad46-6, Tilburg University, School of Economics and Management.
    16. Sauermann, Jan & Stenberg, Anders, 2020. "Assessing Selection Bias in Non-Experimental Estimates of the Returns to Workplace Training," IZA Discussion Papers 13789, Institute of Labor Economics (IZA).
    17. Peter Z. Schochet & John Burghardt, 2007. "Using Propensity Scoring to Estimate Program-Related Subgroup Impacts in Experimental Program Evaluations," Evaluation Review, , vol. 31(2), pages 95-120, April.
    18. Peter M. Steiner, 2011. "Propensity Score Methods for Causal Inference: On the Relative Importance of Covariate Selection, Reliable Measurement, and Choice of Propensity Score Technique," Working Papers 09, AlmaLaurea Inter-University Consortium.
    19. Fatih Unlu & Douglas Lee Lauen & Sarah Crittenden Fuller & Tiffany Berglund & Elc Estrera, 2021. "Can Quasi‐Experimental Evaluations That Rely On State Longitudinal Data Systems Replicate Experimental Results?," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 40(2), pages 572-613, March.
    20. Elizabeth Ty Wilde & Robinson Hollister, 2007. "How close is close enough? Evaluating propensity score matching using data from a class size reduction experiment," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 26(3), pages 455-477.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:qualqt:v:47:y:2013:i:4:p:2225-2257. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link it to an item in RePEc, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.