
Dissertation R.C.M. van Aert

Author

Listed:
  • van Aert, Robbie Cornelis Maria

Abstract

More and more scientific research is being published, calling for statistical methods that enable researchers to get an overview of the literature in a particular research field. Meta-analysis methods were developed for this purpose: they statistically combine the effect sizes of independent primary studies on the same topic. My dissertation focuses on two issues that are crucial when conducting a meta-analysis: publication bias and heterogeneity in the primary studies' true effect sizes. Accurate estimation of both the meta-analytic effect size and the between-study variance in true effect size is essential, since the results of meta-analyses are often used for policy making.

Publication bias refers to situations where the publication of a primary study depends on its results, and it therefore distorts the results of a meta-analysis. We developed two new meta-analysis methods, p-uniform and p-uniform*, which estimate effect sizes corrected for publication bias and also test for publication bias. Although both methods perform well in many conditions, they and the other existing methods are shown not to perform well when researchers use questionable research practices. Moreover, when publication bias is absent or limited, traditional methods that do not correct for publication bias outperform p-uniform and p-uniform*. Surprisingly, our pre-registered study of a large-scale data set consisting of 83 meta-analyses and 499 systematic reviews published in the fields of psychology and medicine found no strong evidence for the presence of publication bias.

We also developed two methods for meta-analyzing a statistically significant published original study and a replication of that study, a situation researchers often encounter. One method is frequentist, the other Bayesian. Both are shown to perform better than traditional meta-analytic methods that do not take the statistical significance of the original study into account. Analytical studies of both methods also show that the original study is sometimes better discarded for optimal estimation of the true effect size. In addition, we developed a program for determining the required sample size of a replication, analogous to power analysis in null hypothesis significance testing. Computing required sample sizes with this method revealed that large samples (approximately 650 participants) are needed to distinguish a zero from a small true effect.

Finally, in the last two chapters we derived a new multi-step estimator of the between-study variance in the primary studies' true effect sizes, and we examined the statistical properties of two methods (the Q-profile and generalized Q-statistic methods) for computing a confidence interval for the between-study variance. We proved that the multi-step estimator converges to the Paule-Mandel estimator, which is nowadays one of the recommended estimators of the between-study variance in true effect sizes. Two Monte Carlo simulation studies showed that the coverage probabilities of the Q-profile and generalized Q-statistic methods can be substantially below the nominal coverage rate when the assumptions underlying the random-effects meta-analysis model are violated.
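
A note on the mechanics of p-uniform may help here. The method rests on the observation that, conditional on being statistically significant, the p-values of primary studies are uniformly distributed at the true effect size. The sketch below, written in R (the language of the metafor package cited under the references), reimplements this principle for the simple case of one-sided z-tests with known sampling variances, using the moment condition of p-uniform's "P" estimator (the sum of conditional p-values equals its expectation K/2). It is an illustrative sketch under those simplifying assumptions, not the authors' implementation; the function name and toy data are invented for this example.

    # Illustrative sketch of the p-uniform principle (not the authors' code).
    # Conditional on significance, p-values are uniform at the true effect
    # size theta; we estimate theta by making the sum of the conditional
    # p-values equal its expected value K/2.
    puniform_sketch <- function(yi, vi, alpha = 0.05) {
      zcrit <- qnorm(1 - alpha)          # one-sided significance threshold
      sei   <- sqrt(vi)
      stopifnot(all(yi / sei > zcrit))   # only significant studies enter
      K <- length(yi)

      # Conditional p-value of study i at candidate effect theta:
      # P(Y_i > y_i | Y_i is significant; theta)
      cond_p <- function(theta) {
        num <- pnorm((yi - theta) / sei, lower.tail = FALSE)
        den <- pnorm(zcrit - theta / sei, lower.tail = FALSE)
        num / den
      }

      # Solve: sum of conditional p-values = K/2
      uniroot(function(theta) sum(cond_p(theta)) - K / 2,
              interval = c(-2, 2), extendInt = "yes")$root
    }

    # Toy data: four hypothetical significant studies
    yi <- c(0.45, 0.60, 0.38, 0.52)
    vi <- c(0.030, 0.045, 0.022, 0.038)
    puniform_sketch(yi, vi)

At theta = 0 each conditional p-value reduces to p_i / alpha, the quantity underlying p-uniform's publication bias test; the authors also provide an R package, puniform, implementing the full methods, including p-uniform*.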
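
The sample size figure quoted above (roughly 650 participants) is produced by the dissertation's own procedure; the classical power analysis it is analogous to can be illustrated in one line of base R. The effect size, power level, and test below are assumed values chosen only to show the analogy, not the settings used in the dissertation.

    # Classical power analysis for a two-sample t-test: sample size per
    # group needed to detect a small effect (Cohen's d = 0.2) with 80%
    # power at alpha = .05. Illustrative values only.
    power.t.test(delta = 0.2, sd = 1, sig.level = 0.05, power = 0.80)

This returns roughly 394 participants per group; the dissertation's method answers the analogous question of how many participants a replication needs before a zero and a small true effect can be told apart.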
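
Finally, the quantities from the last two chapters, the Paule-Mandel estimate of the between-study variance and its Q-profile confidence interval, can both be computed with the metafor package (Viechtbauer, 2010) listed in the references below. The effect sizes in this sketch are made up for illustration.

    # Random-effects meta-analysis with the Paule-Mandel estimator of the
    # between-study variance tau^2 (toy data).
    library(metafor)

    yi <- c(0.38, 0.11, 0.52, 0.27, -0.05, 0.41)       # observed effect sizes
    vi <- c(0.020, 0.041, 0.015, 0.033, 0.052, 0.018)  # sampling variances

    res <- rma(yi, vi, method = "PM")
    summary(res)   # pooled effect size and tau^2 estimate

    # Confidence interval for tau^2; for a model fitted this way, metafor
    # uses the Q-profile method, the first of the two CI methods studied
    # in the dissertation's final chapters
    confint(res)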

Suggested Citation

  • van Aert, Robbie Cornelis Maria, 2018. "Dissertation R.C.M. van Aert," MetaArXiv eqhjd, Center for Open Science.
  • Handle: RePEc:osf:metaar:eqhjd
    DOI: 10.31219/osf.io/eqhjd

    Download full text from publisher

    File URL: https://osf.io/download/5b1686d6a291c4000d3ac3e0/
    Download Restriction: no

    File URL: https://libkey.io/10.31219/osf.io/eqhjd?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Marcel A L M van Assen & Robbie C M van Aert & Michèle B Nuijten & Jelte M Wicherts, 2014. "Why Publishing Everything Is More Effective than Selective Publishing of Statistically Significant Results," PLOS ONE, Public Library of Science, vol. 9(1), pages 1-5, January.
    2. John P A Ioannidis, 2005. "Why Most Published Research Findings Are False," PLOS Medicine, Public Library of Science, vol. 2(8), pages 1-1, August.
    3. repec:cup:judgdm:v:6:y:2011:i:8:p:870-881 (reference not matched with an item on IDEAS).
    4. Gerber, Alan S. & Green, Donald P. & Nickerson, David, 2001. "Testing for Publication Bias in Political Science," Political Analysis, Cambridge University Press, vol. 9(4), pages 385-392, January.
    5. Robbie C M van Aert & Marcel A L M van Assen, 2017. "Bayesian evaluation of effect size after replicating an original study," PLOS ONE, Public Library of Science, vol. 12(4), pages 1-23, April.
    6. Joseph Henrich & Steve J. Heine & Ara Norenzayan, 2010. "The Weirdest People in the World?," RatSWD Working Papers 139, German Data Forum (RatSWD).
    7. Daniele Fanelli, 2012. "Negative results are disappearing from most disciplines and countries," Scientometrics, Springer;Akadémiai Kiadó, vol. 90(3), pages 891-904, March.
    8. Larose, Daniel T. & Dey, Dipak K., 1998. "Modeling publication bias using weighted distributions in a Bayesian framework," Computational Statistics & Data Analysis, Elsevier, vol. 26(3), pages 279-302, January.
    9. Ben Goldacre, 2016. "Make journals report clinical trials properly," Nature, Nature, vol. 530(7588), pages 7-7, February.
    10. Jack Vevea & Larry Hedges, 1995. "A general linear model for estimating effect size in the presence of publication bias," Psychometrika, Springer;The Psychometric Society, vol. 60(3), pages 419-435, September.
    11. J. Copas, 1999. "What works?: selectivity models and meta‐analysis," Journal of the Royal Statistical Society Series A, Royal Statistical Society, vol. 162(1), pages 95-109.
    12. C. Glenn Begley & Lee M. Ellis, 2012. "Raise standards for preclinical cancer research," Nature, Nature, vol. 483(7391), pages 531-533, March.
    13. Viechtbauer, Wolfgang, 2010. "Conducting Meta-Analyses in R with the metafor Package," Journal of Statistical Software, Foundation for Open Access Statistics, vol. 36(i03).
    14. R. W. Farebrother, 1984. "The Distribution of a Positive Linear Combination of χ² Random Variables," Journal of the Royal Statistical Society Series C, Royal Statistical Society, vol. 33(3), pages 332-339, November.
    15. J. B. Copas & H. G. Li, 1997. "Inference for Non‐random Samples," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 59(1), pages 55-95.
    16. Anton Kühberger & Astrid Fritz & Thomas Scherndl, 2014. "Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size," PLOS ONE, Public Library of Science, vol. 9(9), pages 1-8, September.
    17. Larry V. Hedges & Jack L. Vevea, 1996. "Estimating Effect Size Under Publication Bias: Small Sample Properties and Robustness of a Random Effects Selection Model," Journal of Educational and Behavioral Statistics, , vol. 21(4), pages 299-332, December.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. van Aert, Robbie Cornelis Maria & van Assen, Marcel A. L. M., 2018. "P-uniform," MetaArXiv zqjr9, Center for Open Science.
    2. Oliver Braganza, 2020. "A simple model suggesting economically rational sample-size choice drives irreproducibility," PLOS ONE, Public Library of Science, vol. 15(3), pages 1-19, March.
    3. Robbie C M van Aert & Jelte M Wicherts & Marcel A L M van Assen, 2019. "Publication bias examined in meta-analyses from psychology and medicine: A meta-meta-analysis," PLOS ONE, Public Library of Science, vol. 14(4), pages 1-32, April.
    4. Furukawa, Chishio, 2019. "Publication Bias under Aggregation Frictions: Theory, Evidence, and a New Correction Method," EconStor Preprints 194798, ZBW - Leibniz Information Centre for Economics.
    5. Bernhard Voelkl & Lucile Vogt & Emily S Sena & Hanno Würbel, 2018. "Reproducibility of preclinical animal research improves with heterogeneity of study samples," PLOS Biology, Public Library of Science, vol. 16(2), pages 1-13, February.
    6. Mark D Lindner & Richard K Nakamura, 2015. "Examining the Predictive Validity of NIH Peer Review Scores," PLOS ONE, Public Library of Science, vol. 10(6), pages 1-12, June.
    7. Irsova, Zuzana & Bom, Pedro Ricardo Duarte & Havranek, Tomas & Rachinger, Heiko, 2023. "Spurious Precision in Meta-Analysis," MetaArXiv 3qp2w, Center for Open Science.
    8. Klaus E Meyer & Arjen Witteloostuijn & Sjoerd Beugelsdijk, 2017. "What’s in a p? Reassessing best practices for conducting and reporting hypothesis-testing research," Journal of International Business Studies, Palgrave Macmillan;Academy of International Business, vol. 48(5), pages 535-551, July.
    9. Karin Langenkamp & Bodo Rödel & Kerstin Taufenbach & Meike Weiland, 2018. "Open Access in Vocational Education and Training Research," Publications, MDPI, vol. 6(3), pages 1-12, July.
    10. Augusteijn, Hilde Elisabeth Maria & van Aert, Robbie Cornelis Maria & van Assen, Marcel A. L. M., 2021. "Posterior Probabilities of Effect Sizes and Heterogeneity in Meta-Analysis: An Intuitive Approach of Dealing with Publication Bias," OSF Preprints avkgj, Center for Open Science.
    11. Dwight C. K. Tse & Jeanne Nakamura & Mihaly Csikszentmihalyi, 2022. "Flow Experiences Across Adulthood: Preliminary Findings on the Continuity Hypothesis," Journal of Happiness Studies, Springer, vol. 23(6), pages 2517-2540, August.
    12. Colin F. Camerer & Anna Dreber & Felix Holzmeister & Teck-Hua Ho & Jürgen Huber & Magnus Johannesson & Michael Kirchler & Gideon Nave & Brian A. Nosek & Thomas Pfeiffer & Adam Altmejd & Nick Buttrick, 2018. "Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015," Nature Human Behaviour, Nature, vol. 2(9), pages 637-644, September.
    13. Brian Fabo & Martina Jancokova & Elisabeth Kempf & Lubos Pastor, 2020. "Fifty Shades of QE: Conflicts of Interest in Economic Research," Working and Discussion Papers WP 5/2020, Research Department, National Bank of Slovakia.
    14. Bettina Bert & Céline Heinl & Justyna Chmielewska & Franziska Schwarz & Barbara Grune & Andreas Hensel & Matthias Greiner & Gilbert Schönfelder, 2019. "Refining animal research: The Animal Study Registry," PLOS Biology, Public Library of Science, vol. 17(10), pages 1-12, October.
    15. Christian Heise & Joshua M. Pearce, 2020. "From Open Access to Open Science: The Path From Scientific Reality to Open Scientific Communication," SAGE Open, , vol. 10(2), pages 21582440209, May.
    16. Hajko, Vladimír, 2017. "The failure of Energy-Economy Nexus: A meta-analysis of 104 studies," Energy, Elsevier, vol. 125(C), pages 771-787.
    17. Eszter Czibor & David Jimenez‐Gomez & John A. List, 2019. "The Dozen Things Experimental Economists Should Do (More of)," Southern Economic Journal, John Wiley & Sons, vol. 86(2), pages 371-432, October.
    18. Marlo M Vernon & E Andrew Balas & Shaher Momani, 2018. "Are university rankings useful to improve research? A systematic review," PLOS ONE, Public Library of Science, vol. 13(3), pages 1-15, March.
    19. Maier, Maximilian & VanderWeele, Tyler & Mathur, Maya B, 2021. "Using Selection Models to Assess Sensitivity to Publication Bias: A Tutorial and Call for More Routine Use," MetaArXiv tp45u, Center for Open Science.
    20. Mueller-Langer, Frank & Fecher, Benedikt & Harhoff, Dietmar & Wagner, Gert G., 2019. "Replication studies in economics—How many and which papers are chosen for replication, and why?," Research Policy, Elsevier, vol. 48(1), pages 62-83.

