
Analysis of No-Difference Findings in Evaluation Research

Author

Listed:
  • George Julnes

    (University of Michigan)

  • Lawrence B. Mohr

    (University of Michigan)

Abstract

Conclusions of no difference are becoming increasingly important in evaluation research. We delineate three major uses of no-difference findings and analyze their meanings. (1) No-difference findings in randomized experiments can be interpreted as support for conclusions of the absence of a meaningful treatment effect, but only if the proper analytic methods are used. (2) Statistically based conclusions in quasi-experiments do not allow causal statements about the treatment impact but do provide a metric to judge the size of the resulting difference. (3) Using no-difference findings to conclude equivalence on control variables is inefficient and potentially misleading. The final section of the article presents alternative methods by which conclusions of no difference may be supported when applicable. These methods include the use of arbitrarily high alpha levels, interval estimation, and power analysis.
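The three supporting methods named at the end of the abstract can be made concrete. The following is a minimal sketch, not taken from the article: it applies a deliberately high alpha level, an interval estimate of the treatment effect, and a prospective power calculation to hypothetical two-group outcome data. The data, the alpha of .25, and the effect-size threshold of d = 0.3 are all illustrative assumptions; the scipy and statsmodels routines used are standard.

```python
# Sketch of three ways to support a no-difference conclusion
# (high alpha, interval estimation, power analysis) on hypothetical data.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(0)
treat = rng.normal(loc=0.0, scale=1.0, size=60)    # hypothetical treatment outcomes
control = rng.normal(loc=0.0, scale=1.0, size=60)  # hypothetical control outcomes

# 1. Arbitrarily high alpha (e.g., .25): failing to reject becomes harder,
#    so a nonsignificant result is more informative about absence of an effect.
t_stat, p_value = stats.ttest_ind(treat, control)
print(f"p = {p_value:.3f}; nonsignificant even at alpha = .25: {p_value > 0.25}")

# 2. Interval estimation: a 95% CI for the mean difference. A narrow interval
#    around zero is direct evidence that any effect is small.
diff = treat.mean() - control.mean()
se = np.sqrt(treat.var(ddof=1) / len(treat) + control.var(ddof=1) / len(control))
df = len(treat) + len(control) - 2  # pooled-df approximation for equal-sized groups
half_width = stats.t.ppf(0.975, df) * se
print(f"95% CI for effect: [{diff - half_width:.2f}, {diff + half_width:.2f}]")

# 3. Power analysis: sample size per group needed to detect a "meaningful"
#    effect (Cohen's d = 0.3, an assumed threshold) with power .80 at alpha .05.
n_per_group = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"n per group for 80% power against d = 0.3: {np.ceil(n_per_group):.0f}")
```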

Suggested Citation

  • George Julnes & Lawrence B. Mohr, 1989. "Analysis of No-Difference Findings in Evaluation Research," Evaluation Review, vol. 13(6), pages 628-655, December.
  • Handle: RePEc:sae:evarev:v:13:y:1989:i:6:p:628-655
    DOI: 10.1177/0193841X8901300604

    Download full text from publisher

    File URL: https://journals.sagepub.com/doi/10.1177/0193841X8901300604
    Download Restriction: no

    File URL: https://libkey.io/10.1177/0193841X8901300604?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.
    ---><---

    References listed on IDEAS

    1. Goodman, S.N. & Royall, R., 1988. "Evidence and scientific research," American Journal of Public Health, American Public Health Association, vol. 78(12), pages 1568-1574.
    2. Cohen, Patricia, 1982. "To be or not to be: Control and balancing of type I and type II errors," Evaluation and Program Planning, Elsevier, vol. 5(3), pages 247-253, January.
    3. Ashenfelter, Orley, 1987. "The case for evaluating training programs with randomized trials," Economics of Education Review, Elsevier, vol. 6(4), pages 333-338, August.
    4. Orley Ashenfelter, 1986. "The Case for Evaluating Training Programs with Randomized Trials," Working Papers 583, Princeton University, Department of Economics, Industrial Relations Section.
    5. Thomas Fraker & Rebecca Maynard, 1987. "The Adequacy of Comparison Group Designs for Evaluations of Employment-Related Programs," Journal of Human Resources, University of Wisconsin Press, vol. 22(2), pages 194-227.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. repec:mpr:mprres:4778 is not listed on IDEAS
    2. Deborah Peikes & Sean Orzol & Lorenzo Moreno & Nora Paxton, "undated". "State Partnership Initiative: Selection of Comparison Groups for the Evaluation and Selected Impact Estimates," Mathematica Policy Research Reports f8760335b9ab4a39bdf2c3533, Mathematica Policy Research.
    3. James J. Heckman, 1991. "Randomization and Social Policy Evaluation Revisited," NBER Technical Working Papers 0107, National Bureau of Economic Research, Inc.
    4. Clampit, Jack & Gaffney, Nolan & Fabian, Frances & Stafford, Thomas, 2023. "Institutional misalignment and escape-based FDI: A prospect theory lens," International Business Review, Elsevier, vol. 32(3).
    5. Smith, Jeffrey A. & Todd, Petra E., 2005. "Does matching overcome LaLonde's critique of nonexperimental estimators?," Journal of Econometrics, Elsevier, vol. 125(1-2), pages 305-353.
    6. Metcalf, Charles E., 1997. "The Advantages of Experimental Designs for Evaluating Sex Education Programs," Children and Youth Services Review, Elsevier, vol. 19(7), pages 507-523, November.
    7. Joshua D. Angrist & Jörn-Steffen Pischke, 2010. "The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics," Journal of Economic Perspectives, American Economic Association, vol. 24(2), pages 3-30, Spring.
    8. Ari Hyytinen & Jaakko Meriläinen & Tuukka Saarimaa & Otto Toivanen & Janne Tukiainen, 2018. "When does regression discontinuity design work? Evidence from random election outcomes," Quantitative Economics, Econometric Society, vol. 9(2), pages 1019-1051, July.
    9. Ichimura, Hidehiko & Todd, Petra E., 2007. "Implementing Nonparametric and Semiparametric Estimators," Handbook of Econometrics, in: J.J. Heckman & E.E. Leamer (ed.), Handbook of Econometrics, edition 1, volume 6, chapter 74, Elsevier.
    10. Gonzalo Nunez-Chaim & Henry G. Overman & Capucine Riom, 2024. "Does subsidising business advice improve firm performance? Evidence from a large RCT," CEP Discussion Papers dp1977, Centre for Economic Performance, LSE.
    11. Peter Hull & Michal Kolesár & Christopher Walters, 2022. "Labor by design: contributions of David Card, Joshua Angrist, and Guido Imbens," Scandinavian Journal of Economics, Wiley Blackwell, vol. 124(3), pages 603-645, July.
    12. David Card & Jochen Kluve & Andrea Weber, 2010. "Active Labour Market Policy Evaluations: A Meta-Analysis," Economic Journal, Royal Economic Society, vol. 120(548), pages 452-477, November.
    13. Larry L. Orr, 2018. "The Role of Evaluation in Building Evidence-Based Policy," The ANNALS of the American Academy of Political and Social Science, , vol. 678(1), pages 51-59, July.
    14. Kenneth Fortson & Natalya Verbitsky-Savitz & Emma Kopa & Philip Gleason, 2012. "Using an Experimental Evaluation of Charter Schools to Test Whether Nonexperimental Comparison Group Methods Can Replicate Experimental Impact Estimates," Mathematica Policy Research Reports 27f871b5b7b94f3a80278a593, Mathematica Policy Research.
    15. Guido W. Imbens & Jeffrey M. Wooldridge, 2009. "Recent Developments in the Econometrics of Program Evaluation," Journal of Economic Literature, American Economic Association, vol. 47(1), pages 5-86, March.
    16. Peter R. Mueser & Kenneth R. Troske & Alexey Gorislavsky, 2007. "Using State Administrative Data to Measure Program Performance," The Review of Economics and Statistics, MIT Press, vol. 89(4), pages 761-783, November.
    17. David H. Dean & Robert C. Dolan & Robert M. Schmidt, 1999. "Evaluating the Vocational Rehabilitation Program Using Longitudinal Data," Evaluation Review, , vol. 23(2), pages 162-189, April.
    18. Ferraro, Paul J. & Miranda, Juan José, 2014. "The performance of non-experimental designs in the evaluation of environmental programs: A design-replication study using a large-scale randomized experiment as a benchmark," Journal of Economic Behavior & Organization, Elsevier, vol. 107(PA), pages 344-365.
    19. Ham, John C. & LaLonde, Robert J., 2005. "Special issue on Experimental and non-experimental evaluation of economic policy and models," Journal of Econometrics, Elsevier, vol. 125(1-2), pages 1-13.
    20. Nianbo Dong & Mark W. Lipsey, 2018. "Can Propensity Score Analysis Approximate Randomized Experiments Using Pretest and Demographic Information in Pre-K Intervention Research?," Evaluation Review, , vol. 42(1), pages 34-70, February.
    21. Astrid Grasdal, 2001. "The performance of sample selection estimators to control for attrition bias," Health Economics, John Wiley & Sons, Ltd., vol. 10(5), pages 385-398, July.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:sae:evarev:v:13:y:1989:i:6:p:628-655. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: SAGE Publications (email available below).

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.