
Die Interpretation des p-Wertes – Grundsätzliche Missverständnisse

Author

Listed:
  • Hirschauer Norbert
  • Grüner Sven

    (Professur für Unternehmensführung im Agribusiness, Martin-Luther-Universität Halle-Wittenberg, 06099 Halle (Saale))

  • Mußhoff Oliver

    (Arbeitsbereich Landwirtschaftliche Betriebslehre, Georg-August-Universität Göttingen, Platz der Göttinger Sieben 5, 37073 Göttingen)

  • Frey Ulrich
  • Theesfeld Insa

    (Professur für Agrar-, Umwelt- und Ernährungspolitik, Martin-Luther-Universität Halle-Wittenberg, 06099 Halle (Saale))

  • Wagner Peter

    (Professur für Landwirtschaftliche Betriebslehre, Martin-Luther-Universität Halle-Wittenberg, 06099 Halle (Saale))

Abstract

The p-value is widely regarded as the gold standard for statistical inference. For validating statistical relationships, the convention has emerged to demand p-values as small as possible and to call results statistically significant when values fall below certain thresholds (e.g. 0.05). The p-value is also frequently referred to as the error probability. Both terms are problematic because they invite misunderstandings. In addition, so-called p-hacking, i.e. the deliberate search for analyses that yield statistically significant results, can introduce bias and increase the false discovery rate. Misinterpretations of the p-value and analysis-induced biases have been critically discussed for decades. In empirical research, however, they appear to be persistent, and in recent years the p-value debate has intensified because many studies could not be reproduced. Given that the literature on the p-value problem is scattered across disciplines and often addresses isolated aspects, this methodological comment systematically describes the most important problems and discusses the corresponding proposed solutions.

Suggested Citation

  • Hirschauer Norbert & Grüner Sven & Mußhoff Oliver & Frey Ulrich & Theesfeld Insa & Wagner Peter, 2016. "Die Interpretation des p-Wertes – Grundsätzliche Missverständnisse," Journal of Economics and Statistics (Jahrbuecher fuer Nationaloekonomie und Statistik), De Gruyter, vol. 236(5), pages 557-575, October.
  • Handle: RePEc:jns:jbstat:v:236:y:2016:i:5:p:557-575
    DOI: 10.1515/jbnst-2015-1030

    Download full text from publisher

    File URL: https://doi.org/10.1515/jbnst-2015-1030
    Download Restriction: no

    File URL: https://libkey.io/10.1515/jbnst-2015-1030?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Armstrong, J. Scott, 2007. "Significance tests harm progress in forecasting," International Journal of Forecasting, Elsevier, vol. 23(2), pages 321-327.
    2. John P A Ioannidis, 2005. "Why Most Published Research Findings Are False," PLOS Medicine, Public Library of Science, vol. 2(8), pages 1-1, August.
    3. Walter Krämer, 2011. "The Cult of Statistical Significance – What Economists Should and Should Not Do to Make their Data Talk," Schmollers Jahrbuch : Journal of Applied Social Science Studies / Zeitschrift für Wirtschafts- und Sozialwissenschaften, Duncker & Humblot, Berlin, vol. 131(3), pages 455-468.
    4. Maren Duvendack & Richard W. Palmer-Jones & W. Robert Reed, 2015. "Replications in Economics: A Progress Report," Econ Journal Watch, Econ Journal Watch, vol. 12(2), pages 164-191, May.
    5. John List & Sally Sadoff & Mathis Wagner, 2011. "So you want to run an experiment, now what? Some simple rules of thumb for optimal experimental design," Experimental Economics, Springer;Economic Science Association, vol. 14(4), pages 439-457, November.
    6. Deirdre N. McCloskey & Stephen T. Ziliak, 1996. "The Standard Error of Regressions," Journal of Economic Literature, American Economic Association, vol. 34(1), pages 97-114, March.
    7. Sellke T. & Bayarri M. J. & Berger J. O., 2001. "Calibration of p Values for Testing Precise Null Hypotheses," The American Statistician, American Statistical Association, vol. 55, pages 62-71, February.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Jens Rommel & Meike Weltin, 2021. "Is There a Cult of Statistical Significance in Agricultural Economics?," Applied Economic Perspectives and Policy, John Wiley & Sons, vol. 43(3), pages 1176-1191, September.
    2. Heckelei, Thomas & Huettel, Silke & Odening, Martin & Rommel, Jens, 2021. "The replicability crisis and the p-value debate – what are the consequences for the agricultural and food economics community?," Discussion Papers 316369, University of Bonn, Institute for Food and Resource Economics.
    3. Alexander Herzog-Stein & Camille Logeay, 2019. "Short-Term macroeconomic evaluation of the German minimum wage with a VAR/VECM," IMK Working Paper 197-2019, IMK at the Hans Boeckler Foundation, Macroeconomic Policy Institute.
    4. Anica Veronika Fietz & Sven Grüner, 2017. "Transparency systems: do businesses in North Rhine-Westphalia (Germany) regret the cancellation of the Smiley scheme?," Agricultural and Food Economics, Springer;Italian Society of Agricultural Economics (SIDEA), vol. 5(1), pages 1-10, December.
    5. Hüttel, Silke & Hess, Sebastian, 2023. "Lessons from the p-value debate and the replication crisis for "open Q science" – the editor's perspective or: will the revolution devour its children?," DARE Discussion Papers 2302, Georg-August University of Göttingen, Department of Agricultural Economics and Rural Development (DARE).
    6. Hirschauer Norbert & Grüner Sven & Mußhoff Oliver & Becker Claudia, 2019. "Twenty Steps Towards an Adequate Inferential Interpretation of p-Values in Econometrics," Journal of Economics and Statistics (Jahrbuecher fuer Nationaloekonomie und Statistik), De Gruyter, vol. 239(4), pages 703-721, August.
    7. Aparo, Nathaline Onek & Odongo, Walter & De Steur, Hans, 2022. "Unraveling heterogeneity in farmer's adoption of mobile phone technologies: A systematic review," Technological Forecasting and Social Change, Elsevier, vol. 185(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Hirschauer Norbert & Grüner Sven & Mußhoff Oliver & Becker Claudia, 2019. "Twenty Steps Towards an Adequate Inferential Interpretation of p-Values in Econometrics," Journal of Economics and Statistics (Jahrbuecher fuer Nationaloekonomie und Statistik), De Gruyter, vol. 239(4), pages 703-721, August.
    2. Jesper W. Schneider, 2015. "Null hypothesis significance tests. A mix-up of two different theories: the basis for widespread confusion and numerous misinterpretations," Scientometrics, Springer;Akadémiai Kiadó, vol. 102(1), pages 411-432, January.
    3. Kim, Jae H. & Ji, Philip Inyeob, 2015. "Significance testing in empirical finance: A critical review and assessment," Journal of Empirical Finance, Elsevier, vol. 34(C), pages 1-14.
    4. Jyotirmoy Sarkar, 2018. "Will P-Value Triumph over Abuses and Attacks?," Biostatistics and Biometrics Open Access Journal, Juniper Publishers Inc., vol. 7(4), pages 66-71, July.
    5. Stephen T. Ziliak & Deirdre N. McCloskey, 2013. "We Agree That Statistical Significance Proves Essentially Nothing: A Rejoinder to Thomas Mayer," Econ Journal Watch, Econ Journal Watch, vol. 10(1), pages 97-107, January.
    6. Michaelides, Michael, 2021. "Large sample size bias in empirical finance," Finance Research Letters, Elsevier, vol. 41(C).
    7. Eszter Czibor & David Jimenez‐Gomez & John A. List, 2019. "The Dozen Things Experimental Economists Should Do (More of)," Southern Economic Journal, John Wiley & Sons, vol. 86(2), pages 371-432, October.
    8. Stephan B. Bruns & David I. Stern, 2019. "Lag length selection and p-hacking in Granger causality testing: prevalence and performance of meta-regression models," Empirical Economics, Springer, vol. 56(3), pages 797-830, March.
    9. Emma von Essen & Marieke Huysentruyt & Topi Miettinen, 2019. "Exploration in Teams and the Encouragement Effect: Theory and Evidence," Economics Working Papers 2019-10, Department of Economics and Business Economics, Aarhus University.
    10. Brian Albert Monroe, 2020. "The statistical power of individual-level risk preference estimation," Journal of the Economic Science Association, Springer;Economic Science Association, vol. 6(2), pages 168-188, December.
    11. W. Robert Reed, 2018. "A Primer on the ‘Reproducibility Crisis’ and Ways to Fix It," Australian Economic Review, The University of Melbourne, Melbourne Institute of Applied Economic and Social Research, vol. 51(2), pages 286-300, June.
    12. Black, Bernard & Hollingsworth, Alex & Nunes, Letícia & Simon, Kosali, 2022. "Simulated power analyses for observational studies: An application to the Affordable Care Act Medicaid expansion," Journal of Public Economics, Elsevier, vol. 213(C).
    13. Jesper W. Schneider, 2018. "NHST is still logically flawed," Scientometrics, Springer;Akadémiai Kiadó, vol. 115(1), pages 627-635, April.
    14. Zacharias Maniadis & Fabio Tufano & John A. List, 2017. "To Replicate or Not To Replicate? Exploring Reproducibility in Economics through the Lens of a Model and a Pilot Study," Economic Journal, Royal Economic Society, vol. 127(605), pages 209-235, October.
    15. Schneider, Jesper W., 2013. "Caveats for using statistical significance tests in research assessments," Journal of Informetrics, Elsevier, vol. 7(1), pages 50-62.
    16. Nosek, Brian A. & Ebersole, Charles R. & DeHaven, Alexander Carl & Mellor, David Thomas, 2018. "The Preregistration Revolution," OSF Preprints 2dxu5, Center for Open Science.
    17. Bruns, Stephan B. & Asanov, Igor & Bode, Rasmus & Dunger, Melanie & Funk, Christoph & Hassan, Sherif M. & Hauschildt, Julia & Heinisch, Dominik & Kempa, Karol & König, Johannes & Lips, Johannes & Verb, 2019. "Reporting errors and biases in published empirical findings: Evidence from innovation research," Research Policy, Elsevier, vol. 48(9), pages 1-1.
    18. Mayo, Deborah & Morey, Richard Donald, 2017. "A Poor Prognosis for the Diagnostic Screening Critique of Statistical Tests," OSF Preprints ps38b, Center for Open Science.
    19. Denes Szucs & John P A Ioannidis, 2017. "Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature," PLOS Biology, Public Library of Science, vol. 15(3), pages 1-18, March.
    20. Nicolas Vallois & Dorian Jullien, 2018. "A history of statistical methods in experimental economics," The European Journal of the History of Economic Thought, Taylor & Francis Journals, vol. 25(6), pages 1455-1492, November.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:jns:jbstat:v:236:y:2016:i:5:p:557-575. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form .

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Peter Golla (email available below). General contact details of provider: https://www.degruyter.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.