
The Dilemma of Heterogeneity Tests in Meta-Analysis: A Challenge from a Simulation Study

Author

Listed:
  • Shi-jun Li
  • Hua Jiang
  • Hao Yang
  • Wei Chen
  • Jin Peng
  • Ming-wei Sun
  • Charles Damien Lu
  • Xi Peng
  • Jun Zeng

Abstract

Introduction: After several decades of development, meta-analysis has become a pillar of evidence-based medicine. Heterogeneity, however, remains a threat to the validity and quality of such studies. Currently, the Q test and its descendant, the I2 (I-squared) statistic, are the most widely used tools for evaluating heterogeneity. The core task of these tests is to identify data sets drawn from similar populations and to exclude those drawn from different populations. Although Q and I2 are the default tools for heterogeneity testing, the work presented here demonstrates that the robustness of these two tools is questionable. Methods and Findings: We simulated a strictly normalized population S. The simulation successfully represents randomized controlled trial data sets that fit the theoretical distribution (experimental group: p = 0.37; control group: p = 0.88). We then randomly generated research samples Si that fit the population with only tiny deviations. In short, these data sets are ideal and can be regarded as completely homogeneous data from exactly the same population. If Q and I2 were truly robust tools, testing them on our simulated data sets should not yield positive results. We then synthesized these trials using a fixed-effect model. The pooled results indicated that the mean difference (MD) corresponds closely to the true value and that the 95% confidence interval (CI) is narrow. However, when the number of trials and the sample size of the trials enrolled in the meta-analysis were substantially increased, the Q and I2 values also increased steadily. This result indicates that Q and I2 are suitable only for testing heterogeneity among trials with small sample sizes, and are not appropriate when the sample sizes and the number of trials increase substantially. Conclusions: Every day, meta-analyses containing flawed data analysis emerge and are passed on to clinical practitioners as "updated evidence". Evidence built on heterogeneous data sets leads to wrong conclusions, creates confusion in clinical practice, and weakens the foundation of evidence-based medicine. We suggest a stricter application of meta-analysis: it should be applied only to synthesized trials with small sample sizes. We call for the tools of evidence-based medicine to keep up to date with cutting-edge technologies in data science. Clinical research data should be made publicly available whenever a relevant article is published, so that the research community can conduct in-depth data mining, which in many instances is a better alternative to meta-analysis.
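
The abstract does not reproduce the authors' simulation code. The following is a minimal Python sketch of the kind of experiment it describes, assuming normally distributed trial arms, inverse-variance fixed-effect pooling, Cochran's Q, and the standard I2 = max(0, (Q - df)/Q) definition. The population parameters, trial counts, and per-arm sample sizes below are illustrative placeholders, not the values used in the study.

import numpy as np

rng = np.random.default_rng(42)

def simulate_trial(n, mu_exp, mu_ctl, sd):
    """Draw one simulated RCT: both arms come from the same known normal
    population, so the trials are homogeneous by construction."""
    exp_arm = rng.normal(mu_exp, sd, n)
    ctl_arm = rng.normal(mu_ctl, sd, n)
    md = exp_arm.mean() - ctl_arm.mean()                      # mean difference
    se = np.sqrt(exp_arm.var(ddof=1) / n + ctl_arm.var(ddof=1) / n)
    return md, se

def fixed_effect_meta(mds, ses):
    """Fixed-effect (inverse-variance) pooling with Cochran's Q and I2."""
    mds = np.asarray(mds)
    w = 1.0 / np.asarray(ses) ** 2                            # inverse-variance weights
    pooled = np.sum(w * mds) / np.sum(w)
    q = np.sum(w * (mds - pooled) ** 2)                       # Cochran's Q
    df = len(mds) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0       # I2 in percent
    return pooled, q, i2

# Homogeneous scenario: every trial samples the same population, so any
# heterogeneity flagged by Q or I2 on these data would be spurious.
# (k, n) pairs are hypothetical settings, not those reported in the paper.
for k, n in [(10, 50), (50, 500), (200, 5000)]:
    results = [simulate_trial(n, mu_exp=10.0, mu_ctl=9.0, sd=2.0) for _ in range(k)]
    mds, ses = zip(*results)
    pooled, q, i2 = fixed_effect_meta(mds, ses)
    print(f"trials={k:4d}  n per arm={n:5d}  MD={pooled:5.3f}  Q={q:8.2f}  I2={i2:5.1f}%")

Running the loop over increasing trial counts and sample sizes mirrors the paper's design of scaling up a perfectly homogeneous meta-analysis and observing how Q and I2 behave.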

Suggested Citation

  • Shi-jun Li & Hua Jiang & Hao Yang & Wei Chen & Jin Peng & Ming-wei Sun & Charles Damien Lu & Xi Peng & Jun Zeng, 2015. "The Dilemma of Heterogeneity Tests in Meta-Analysis: A Challenge from a Simulation Study," PLOS ONE, Public Library of Science, vol. 10(5), pages 1-9, May.
  • Handle: RePEc:plo:pone00:0127538
    DOI: 10.1371/journal.pone.0127538

    Download full text from publisher

    File URL: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0127538
    Download Restriction: no

    File URL: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0127538&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pone.0127538?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Jorge N Zumaeta, 2021. "Meta-Analysis of Seven Standard Experimental Paradigms Comparing Student to Non-student," Journal of Economics and Behavioral Studies, AMH International, vol. 13(2), pages 22-33.


