
Blinding Us to the Obvious? The Effect of Statistical Training on the Evaluation of Evidence

Author

Listed:
  • Blakeley B. McShane

    (Kellogg School of Management, Northwestern University, Evanston, Illinois 60208)

  • David Gal

    (College of Business Administration, University of Illinois at Chicago, Chicago, Illinois 60607)

Abstract

Statistical training helps individuals analyze and interpret data. However, the emphasis placed on null hypothesis significance testing in academic training and reporting may lead researchers to interpret evidence dichotomously rather than continuously. Consequently, researchers may either disregard evidence that fails to attain statistical significance or undervalue it relative to evidence that attains statistical significance. Surveys of researchers across a wide variety of fields (including medicine, epidemiology, cognitive science, psychology, business, and economics) show that a substantial majority does indeed do so. This phenomenon is manifest both in researchers’ interpretations of descriptions of evidence and in their likelihood judgments. Dichotomization of evidence is reduced though still present when researchers are asked to make decisions based on the evidence, particularly when the decision outcome is personally consequential. Recommendations are offered. This paper was accepted by Yuval Rottenstreich, judgment and decision making.
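To make the dichotomization problem concrete, the sketch below (Python, with illustrative numbers that are not taken from the paper) contrasts two hypothetical studies whose evidence is nearly identical yet falls on opposite sides of the conventional p = 0.05 threshold. As the Gelman and Stern (2006) article listed in the references below puts it, the difference between "significant" and "not significant" is not itself statistically significant.

```python
# A minimal sketch of the dichotomization problem; the effect sizes and
# standard errors below are hypothetical, chosen only to straddle p = 0.05.
from scipy import stats
import numpy as np

studies = {
    "A": {"effect": 2.0, "se": 1.0},  # z = 2.00 -> p ~ 0.046 ("significant")
    "B": {"effect": 1.9, "se": 1.0},  # z = 1.90 -> p ~ 0.057 ("not significant")
}

for name, s in studies.items():
    z = s["effect"] / s["se"]
    p = 2 * stats.norm.sf(abs(z))  # two-sided p-value, normal approximation
    print(f"Study {name}: z = {z:.2f}, p = {p:.3f}")

# A dichotomous reader treats A as evidence of an effect and B as evidence of
# none, yet the two estimates are statistically indistinguishable:
diff = studies["A"]["effect"] - studies["B"]["effect"]
se_diff = np.sqrt(studies["A"]["se"] ** 2 + studies["B"]["se"] ** 2)
p_diff = 2 * stats.norm.sf(abs(diff / se_diff))
print(f"A vs. B: difference = {diff:.2f}, p = {p_diff:.3f}")  # p ~ 0.94
```

Read continuously, the two studies carry almost the same evidential weight; read dichotomously, one "works" and the other "fails," which is the interpretive error the paper documents.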

Suggested Citation

  • Blakeley B. McShane & David Gal, 2016. "Blinding Us to the Obvious? The Effect of Statistical Training on the Evaluation of Evidence," Management Science, INFORMS, vol. 62(6), pages 1707-1718, June.
  • Handle: RePEc:inm:ormnsc:v:62:y:2016:i:6:p:1707-1718
    DOI: 10.1287/mnsc.2015.2212

    Download full text from publisher

    File URL: http://dx.doi.org/10.1287/mnsc.2015.2212
    Download Restriction: no

    File URL: https://libkey.io/10.1287/mnsc.2015.2212?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Deirdre N. McCloskey & Stephen T. Ziliak, 1996. "The Standard Error of Regressions," Journal of Economic Literature, American Economic Association, vol. 34(1), pages 97-114, March.
    2. John P A Ioannidis, 2005. "Why Most Published Research Findings Are False," PLOS Medicine, Public Library of Science, vol. 2(8), pages 1-1, August.
    3. Kevin Hoover & Mark Siegler, 2008. "Sound and fury: McCloskey and significance testing in economics," Journal of Economic Methodology, Taylor & Francis Journals, vol. 15(1), pages 1-37.
    4. Abel Brodeur & Mathias Lé & Marc Sangnier & Yanos Zylberberg, 2016. "Star Wars: The Empirics Strike Back," American Economic Journal: Applied Economics, American Economic Association, vol. 8(1), pages 1-32, January.
    5. Andreas Schwab & Eric Abrahamson & William H. Starbuck & Fiona Fidler, 2011. "PERSPECTIVE---Researchers Should Make Thoughtful Assessments Instead of Null-Hypothesis Significance Tests," Organization Science, INFORMS, vol. 22(4), pages 1105-1120, August.
    6. Ed Yong, 2012. "Replication studies: Bad copy," Nature, Nature, vol. 485(7398), pages 298-300, May.
    7. Andrew Gelman & Hal Stern, 2006. "The Difference Between 'Significant' and 'Not Significant' Is Not Itself Statistically Significant," The American Statistician, American Statistical Association, vol. 60, pages 328-331, November.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Todd A. Hall & Sharique Hasan, 2022. "Organizational decision-making and the returns to experimentation," Journal of Organization Design, Springer;Organizational Design Community, vol. 11(4), pages 129-144, December.
    2. Roy Chen & Yan Chen & Yohanes E. Riyanto, 2021. "Best practices in replication: a case study of common information in coordination games," Experimental Economics, Springer;Economic Science Association, vol. 24(1), pages 2-30, March.
    3. Blakeley B. McShane & David Gal, 2017. "Rejoinder: Statistical Significance and the Dichotomization of Evidence," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 112(519), pages 904-908, July.
    4. Anderson, Brian S. & Wennberg, Karl & McMullen, Jeffery S., 2019. "Editorial: Enhancing quantitative theory-testing entrepreneurship research," Journal of Business Venturing, Elsevier, vol. 34(5), pages 1-1.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
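    As a rough illustration of how such a relatedness measure might work (the scoring method RePEc actually uses is not described on this page), one could rank candidate items by how much their reference and citation sets overlap with this item's, as in the hypothetical sketch below.

```python
# Hypothetical sketch: score a candidate item's relatedness to a target item
# by the Jaccard overlap of their cited works plus that of their citing works.
# This is an illustration, not RePEc's actual algorithm.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def relatedness(target_refs, target_citers, item_refs, item_citers) -> float:
    return jaccard(target_refs, item_refs) + jaccard(target_citers, item_citers)

# Toy handles standing in for bibliographic records.
target_refs, target_citers = {"gelman2006", "ioannidis2005"}, {"camerer2018"}
cand_refs, cand_citers = {"gelman2006", "ziliak1996"}, {"camerer2018", "bruns2019"}

print(relatedness(target_refs, target_citers, cand_refs, cand_citers))  # ~0.833
```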
    1. Hirschauer Norbert & Grüner Sven & Mußhoff Oliver & Becker Claudia, 2019. "Twenty Steps Towards an Adequate Inferential Interpretation of p-Values in Econometrics," Journal of Economics and Statistics (Jahrbuecher fuer Nationaloekonomie und Statistik), De Gruyter, vol. 239(4), pages 703-721, August.
    2. Nicolas Vallois & Dorian Jullien, 2017. "Estimating Rationality in Economics: A History of Statistical Methods in Experimental Economics," Working Papers halshs-01651070, HAL.
    3. Nicolas Vallois & Dorian Jullien, 2018. "A history of statistical methods in experimental economics," The European Journal of the History of Economic Thought, Taylor & Francis Journals, vol. 25(6), pages 1455-1492, November.
    4. Stephan B. Bruns & David I. Stern, 2019. "Lag length selection and p-hacking in Granger causality testing: prevalence and performance of meta-regression models," Empirical Economics, Springer, vol. 56(3), pages 797-830, March.
    5. Kim, Jae H. & Ji, Philip Inyeob, 2015. "Significance testing in empirical finance: A critical review and assessment," Journal of Empirical Finance, Elsevier, vol. 34(C), pages 1-14.
    6. Camerer, Colin & Dreber, Anna & Forsell, Eskil & Ho, Teck-Hua & Huber, Jurgen & Johannesson, Magnus & Kirchler, Michael & Almenberg, Johan & Altmejd, Adam & Chan, Taizan & Heikensten, Emma & Holzmeister, 2016. "Evaluating replicability of laboratory experiments in Economics," MPRA Paper 75461, University Library of Munich, Germany.
    7. Jesper W. Schneider, 2015. "Null hypothesis significance tests. A mix-up of two different theories: the basis for widespread confusion and numerous misinterpretations," Scientometrics, Springer;Akadémiai Kiadó, vol. 102(1), pages 411-432, January.
    8. Bruns, Stephan B. & Asanov, Igor & Bode, Rasmus & Dunger, Melanie & Funk, Christoph & Hassan, Sherif M. & Hauschildt, Julia & Heinisch, Dominik & Kempa, Karol & König, Johannes & Lips, Johannes & Verb, 2019. "Reporting errors and biases in published empirical findings: Evidence from innovation research," Research Policy, Elsevier, vol. 48(9), pages 1-1.
    9. Klaus E Meyer & Arjen Witteloostuijn & Sjoerd Beugelsdijk, 2017. "What’s in a p? Reassessing best practices for conducting and reporting hypothesis-testing research," Journal of International Business Studies, Palgrave Macmillan;Academy of International Business, vol. 48(5), pages 535-551, July.
    10. Alexander Frankel & Maximilian Kasy, 2022. "Which Findings Should Be Published?," American Economic Journal: Microeconomics, American Economic Association, vol. 14(1), pages 1-38, February.
    11. Stanley, T. D. & Doucouliagos, Chris, 2019. "Practical Significance, Meta-Analysis and the Credibility of Economics," IZA Discussion Papers 12458, Institute of Labor Economics (IZA).
    12. Peter J. Veazie, 2015. "Understanding Statistical Testing," SAGE Open, , vol. 5(1), pages 21582440145, January.
    13. Uwe Hassler & Marc‐Oliver Pohle, 2022. "Unlucky Number 13? Manipulating Evidence Subject to Snooping," International Statistical Review, International Statistical Institute, vol. 90(2), pages 397-410, August.
    14. Thomas Mayer, 2012. "Ziliak and McCloskey's Criticisms of Significance Tests: An Assessment," Econ Journal Watch, Econ Journal Watch, vol. 9(3), pages 256-297, September.
    15. Colin F. Camerer & Anna Dreber & Felix Holzmeister & Teck-Hua Ho & Jürgen Huber & Magnus Johannesson & Michael Kirchler & Gideon Nave & Brian A. Nosek & Thomas Pfeiffer & Adam Altmejd & Nick Buttrick, 2018. "Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015," Nature Human Behaviour, Nature, vol. 2(9), pages 637-644, September.
    16. Stephen T. Ziliak & Deirdre N. McCloskey, 2013. "We Agree That Statistical Significance Proves Essentially Nothing: A Rejoinder to Thomas Mayer," Econ Journal Watch, Econ Journal Watch, vol. 10(1), pages 97-107, January.
    17. Brian Fabo & Martina Jancokova & Elisabeth Kempf & Lubos Pastor, 2020. "Fifty Shades of QE: Conflicts of Interest in Economic Research," Working and Discussion Papers WP 5/2020, Research Department, National Bank of Slovakia.
    18. Bruns, Stephan B. & Ioannidis, John P.A., 2020. "Determinants of economic growth: Different time different answer?," Journal of Macroeconomics, Elsevier, vol. 63(C).
    19. Isaiah Andrews & Maximilian Kasy, 2019. "Identification of and Correction for Publication Bias," American Economic Review, American Economic Association, vol. 109(8), pages 2766-2794, August.
    20. Michaelides, Michael, 2021. "Large sample size bias in empirical finance," Finance Research Letters, Elsevier, vol. 41(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:inm:ormnsc:v:62:y:2016:i:6:p:1707-1718. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Asher (email available below). General contact details of provider: https://edirc.repec.org/data/inforea.html.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.