
Statistical Significance and the Dichotomization of Evidence

Author

Listed:
  • Blakeley B. McShane
  • David Gal

Abstract

In light of recent concerns about reproducibility and replicability, the ASA issued a Statement on Statistical Significance and p-values aimed at those who are not primarily statisticians. While the ASA Statement notes that statistical significance and p-values are “commonly misused and misinterpreted,” it does not discuss and document broader implications of these errors for the interpretation of evidence. In this article, we review research on how applied researchers who are not primarily statisticians misuse and misinterpret p-values in practice and how this can lead to errors in the interpretation of evidence. We also present new data showing, perhaps surprisingly, that researchers who are primarily statisticians are also prone to misuse and misinterpret p-values thus resulting in similar errors. In particular, we show that statisticians tend to interpret evidence dichotomously based on whether or not a p-value crosses the conventional 0.05 threshold for statistical significance. We discuss implications and offer recommendations.
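The dichotomization the abstract describes can be illustrated with a minimal sketch (not from the article, using made-up numbers): two hypothetical studies with essentially the same estimated effect receive opposite verdicts once their p-values are thresholded at 0.05.

```python
# Minimal sketch (hypothetical data, not from McShane & Gal 2017):
# two studies with nearly identical estimates are labeled differently
# once p-values are dichotomized at the conventional 0.05 threshold.
from math import erfc, sqrt

def two_sided_p(estimate, std_error):
    """Two-sided p-value for a normal-theory test of H0: effect = 0."""
    z = abs(estimate / std_error)
    return erfc(z / sqrt(2.0))

# Hypothetical studies: same effect size, slightly different precision.
studies = {"Study A": (2.0, 1.00), "Study B": (2.0, 1.03)}

for name, (est, se) in studies.items():
    p = two_sided_p(est, se)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{name}: estimate={est:.2f}, p={p:.3f} -> {verdict}")
```

Running this prints p ≈ 0.046 for Study A and p ≈ 0.052 for Study B: nearly indistinguishable evidence, yet a dichotomous reading declares one study "significant" and the other "not significant."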

Suggested Citation

  • Blakeley B. McShane & David Gal, 2017. "Statistical Significance and the Dichotomization of Evidence," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 112(519), pages 885-895, July.
  • Handle: RePEc:taf:jnlasa:v:112:y:2017:i:519:p:885-895
    DOI: 10.1080/01621459.2017.1289846

    Download full text from publisher

    File URL: http://hdl.handle.net/10.1080/01621459.2017.1289846
    Download Restriction: Access to full text is restricted to subscribers.

    File URL: https://libkey.io/10.1080/01621459.2017.1289846?utm_source=ideas
LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a page where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Eleni Verykouki & Christos T. Nakas, 2023. "Adaptations on the Use of p-Values for Statistical Inference: An Interpretation of Messages from Recent Public Discussions," Stats, MDPI, vol. 6(2), pages 1-13, April.
    2. Luigi Pace & Alessandra Salvan, 2020. "Likelihood, Replicability and Robbins' Confidence Sequences," International Statistical Review, International Statistical Institute, vol. 88(3), pages 599-615, December.
    3. Jeffrey A. Mills & Gary Cornwall & Beau A. Sauley & Jeffrey R. Strawn, 2018. "Improving the Analysis of Randomized Controlled Trials: a Posterior Simulation Approach," BEA Working Papers 0157, Bureau of Economic Analysis.
    4. Glenn Shafer, 2021. "Testing by betting: A strategy for statistical and scientific communication," Journal of the Royal Statistical Society Series A, Royal Statistical Society, vol. 184(2), pages 407-431, April.
    5. David J. Hand, 2022. "Trustworthiness of statistical inference," Journal of the Royal Statistical Society Series A, Royal Statistical Society, vol. 185(1), pages 329-347, January.
    6. Anderson, Brian S., 2022. "What executives get wrong about statistics: Moving from statistical significance to effect sizes and practical impact," Business Horizons, Elsevier, vol. 65(3), pages 379-388.
    7. Maximilian Maier & Tyler J. VanderWeele & Maya B. Mathur, 2022. "Using selection models to assess sensitivity to publication bias: A tutorial and call for more routine use," Campbell Systematic Reviews, John Wiley & Sons, vol. 18(3), September.
    8. Hirschauer Norbert & Grüner Sven & Mußhoff Oliver & Becker Claudia, 2019. "Twenty Steps Towards an Adequate Inferential Interpretation of p-Values in Econometrics," Journal of Economics and Statistics (Jahrbuecher fuer Nationaloekonomie und Statistik), De Gruyter, vol. 239(4), pages 703-721, August.
    9. Bertoldi, Paolo & Mosconi, Rocco, 2020. "Do energy efficiency policies save energy? A new approach based on energy policy indicators (in the EU Member States)," Energy Policy, Elsevier, vol. 139(C).
    10. Maier, Maximilian & VanderWeele, Tyler & Mathur, Maya B, 2021. "Using Selection Models to Assess Sensitivity to Publication Bias: A Tutorial and Call for More Routine Use," MetaArXiv tp45u, Center for Open Science.
    11. Anderson, Brian S. & Wennberg, Karl & McMullen, Jeffery S., 2019. "Editorial: Enhancing quantitative theory-testing entrepreneurship research," Journal of Business Venturing, Elsevier, vol. 34(5), pages 1-1.
    12. Furukawa, Chishio, 2019. "Publication Bias under Aggregation Frictions: Theory, Evidence, and a New Correction Method," EconStor Preprints 194798, ZBW - Leibniz Information Centre for Economics.
    13. Sadri, Arash, 2022. "The Ultimate Cause of the “Reproducibility Crisis”: Reductionist Statistics," MetaArXiv yxba5, Center for Open Science.
    14. Strømland, Eirik, 2019. "Preregistration and reproducibility," Journal of Economic Psychology, Elsevier, vol. 75(PA).
    15. Han Wang & Sieglinde S Snapp & Monica Fisher & Frederi Viens, 2019. "A Bayesian analysis of longitudinal farm surveys in Central Malawi reveals yield determinants and site-specific management strategies," PLOS ONE, Public Library of Science, vol. 14(8), pages 1-17, August.
    16. Wennberg, Karl & Anderson, Brian S. & McMullen, Jeffrey, 2019. "Editorial: Enhancing Quantitative Theory-Testing Entrepreneurship Research," Ratio Working Papers 323, The Ratio Institute.
    17. J. M. Bauer & L. A. Reisch, 2019. "Behavioural Insights and (Un)healthy Dietary Choices: a Review of Current Evidence," Journal of Consumer Policy, Springer, vol. 42(1), pages 3-45, March.
    18. Maya B. Mathur & Tyler J. VanderWeele, 2020. "Sensitivity analysis for publication bias in meta‐analyses," Journal of the Royal Statistical Society Series C, Royal Statistical Society, vol. 69(5), pages 1091-1119, November.
    19. Tom Engsted, 2024. "What Is the False Discovery Rate in Empirical Research?," Econ Journal Watch, Econ Journal Watch, vol. 21(1), pages 92-112, March.

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:taf:jnlasa:v:112:y:2017:i:519:p:885-895. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Longhurst (email available below). General contact details of provider: http://www.tandfonline.com/UASA20 .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.