Printed from https://ideas.repec.org/a/eee/infome/v11y2017i2p530-540.html

The accuracy of confidence intervals for field normalised indicators

Author

Listed:
  • Thelwall, Mike
  • Fairclough, Ruth

Abstract

When comparing the average citation impact of research groups, universities and countries, field normalisation reduces the influence of discipline and time. Confidence intervals for these indicators can help with attempts to infer whether differences between sets of publications are due to chance factors. Although both bootstrapping and formulae have been proposed for these, their accuracy is unknown. In response, this article uses simulated data to systematically compare the accuracy of confidence limits in the simplest possible case, a single field and year. The results suggest that the MNLCS (Mean Normalised Log-transformed Citation Score) confidence interval formula is conservative for large groups but almost always safe, whereas bootstrap MNLCS confidence intervals tend to be accurate but can be unsafe for smaller world or group sample sizes. In contrast, bootstrap MNCS (Mean Normalised Citation Score) confidence intervals can be very unsafe, although their accuracy increases with sample sizes.
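The indicator and interval types compared in the abstract can be sketched in code. The following is a minimal illustration under standard definitions only, not the authors' simulation code: MNLCS is taken as the mean of ln(1 + c) over the group's citation counts divided by the same mean over the world (field and year) set, and the bootstrap interval is a plain percentile bootstrap that resamples both sets. The function names and any sample data are hypothetical.

```python
import math
import random

def mnlcs(group, world):
    """Mean Normalised Log-transformed Citation Score:
    mean of ln(1 + c) over the group's citation counts,
    divided by the same mean over the world (field/year) set."""
    g = sum(math.log(1 + c) for c in group) / len(group)
    w = sum(math.log(1 + c) for c in world) / len(world)
    return g / w  # assumes the world set has at least one cited article (w > 0)

def bootstrap_mnlcs_ci(group, world, level=0.95, reps=1000, seed=0):
    """Percentile bootstrap confidence interval for MNLCS:
    resample both the group and the world set with replacement,
    recompute the indicator each time, and take the tail percentiles."""
    rng = random.Random(seed)
    stats = []
    for _ in range(reps):
        g = rng.choices(group, k=len(group))
        w = rng.choices(world, k=len(world))
        stats.append(mnlcs(g, w))
    stats.sort()
    lo_i = int((1 - level) / 2 * reps)  # e.g. index 25 of 1000 for 95%
    hi_i = reps - 1 - lo_i
    return stats[lo_i], stats[hi_i]
```

As the abstract notes, intervals of this bootstrap kind tend to be accurate but can be unsafe when the group or world sample is small, since resampling a small set underestimates the tail weight of skewed citation distributions.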

Suggested Citation

  • Thelwall, Mike & Fairclough, Ruth, 2017. "The accuracy of confidence intervals for field normalised indicators," Journal of Informetrics, Elsevier, vol. 11(2), pages 530-540.
  • Handle: RePEc:eee:infome:v:11:y:2017:i:2:p:530-540
    DOI: 10.1016/j.joi.2017.03.004

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S1751157717300408
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.joi.2017.03.004?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item
    ---><---

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Stegehuis, Clara & Litvak, Nelly & Waltman, Ludo, 2015. "Predicting the long-term citation impact of recent publications," Journal of Informetrics, Elsevier, vol. 9(3), pages 642-657.
    2. Fairclough, Ruth & Thelwall, Mike, 2015. "National research impact indicators from Mendeley readers," Journal of Informetrics, Elsevier, vol. 9(4), pages 845-859.
    3. Vincent Larivière & Yves Gingras, 2010. "The impact factor's Matthew Effect: A natural experiment in bibliometrics," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 61(2), pages 424-427, February.
    4. Mike Thelwall & Pardeep Sud, 2016. "Mendeley readership counts: An investigation of temporal and disciplinary differences," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 67(12), pages 3036-3050, December.
    5. Didegah, Fereshteh & Thelwall, Mike, 2013. "Which factors help authors produce the highest impact research? Collaboration, journal and document properties," Journal of Informetrics, Elsevier, vol. 7(4), pages 861-873.
    6. Anne-Wil Harzing & Satu Alakangas, 2016. "Google Scholar, Scopus and the Web of Science: a longitudinal and cross-disciplinary comparison," Scientometrics, Springer; Akadémiai Kiadó, vol. 106(2), pages 787-804, February.
    7. Thelwall, Mike & Wilson, Paul, 2014. "Distributions for cited articles from individual subjects and years," Journal of Informetrics, Elsevier, vol. 8(4), pages 824-839.
    8. Pedro Albarrán & Antonio Perianes-Rodríguez & Javier Ruiz-Castillo, 2015. "Differences in citation impact across countries," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 66(3), pages 512-525, March.
    9. Waltman, Ludo & van Eck, Nees Jan & van Leeuwen, Thed N. & Visser, Martijn S. & van Raan, Anthony F.J., 2011. "Towards a new crown indicator: Some theoretical considerations," Journal of Informetrics, Elsevier, vol. 5(1), pages 37-47.
    10. Wan Jing Low & Paul Wilson & Mike Thelwall, 2016. "Stopped sum models and proposed variants for citation data," Scientometrics, Springer; Akadémiai Kiadó, vol. 107(2), pages 369-384, May.
    11. Dietmar Harhoff & Francis Narin & F. M. Scherer & Katrin Vopel, 1999. "Citation Frequency And The Value Of Patented Inventions," The Review of Economics and Statistics, MIT Press, vol. 81(3), pages 511-515, August.
    12. Thelwall, Mike, 2016. "Are there too many uncited articles? Zero inflated variants of the discretised lognormal and hooked power law distributions," Journal of Informetrics, Elsevier, vol. 10(2), pages 622-633.
    13. B Ian Hutchins & Xin Yuan & James M Anderson & George M Santangelo, 2016. "Relative Citation Ratio (RCR): A New Metric That Uses Citation Rates to Measure Influence at the Article Level," PLOS Biology, Public Library of Science, vol. 14(9), pages 1-25, September.
    14. Ludo Waltman & Clara Calero-Medina & Joost Kosten & Ed C.M. Noyons & Robert J.W. Tijssen & Nees Jan van Eck & Thed N. van Leeuwen & Anthony F.J. van Raan & Martijn S. Visser & Paul Wouters, 2012. "The Leiden ranking 2011/2012: Data collection, indicators, and interpretation," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 63(12), pages 2419-2432, December.
    15. Thelwall, Mike, 2016. "Are the discretised lognormal and hooked power law distributions plausible for citation data?," Journal of Informetrics, Elsevier, vol. 10(2), pages 454-470.
    16. Kayvan Kousha & Mike Thelwall, 2008. "Assessing the impact of disciplinary research on teaching: An automatic analysis of online syllabuses," Journal of the American Society for Information Science and Technology, Association for Information Science & Technology, vol. 59(13), pages 2060-2069, November.
    17. Ludo Waltman & Nees Jan van Eck & Thed N. van Leeuwen & Martijn S. Visser & Anthony F.J. van Raan, 2011. "Towards a new crown indicator: an empirical analysis," Scientometrics, Springer; Akadémiai Kiadó, vol. 87(3), pages 467-481, June.
    18. Thelwall, Mike, 2016. "Citation count distributions for large monodisciplinary journals," Journal of Informetrics, Elsevier, vol. 10(3), pages 863-874.
    19. Thelwall, Mike, 2017. "Three practical field normalised alternative indicator formulae for research evaluation," Journal of Informetrics, Elsevier, vol. 11(1), pages 128-151.
    20. Williams, Richard & Bornmann, Lutz, 2016. "Sampling issues in bibliometric analysis," Journal of Informetrics, Elsevier, vol. 10(4), pages 1225-1232.
    21. Thelwall, Mike & Wilson, Paul, 2014. "Regression for citation data: An evaluation of different methods," Journal of Informetrics, Elsevier, vol. 8(4), pages 963-971.
    22. Filippo Radicchi & Claudio Castellano, 2012. "A Reverse Engineering Approach to the Suppression of Citation Biases Reveals Universal Properties of Citation Distributions," PLOS ONE, Public Library of Science, vol. 7(3), pages 1-9, March.
    23. Aksnes, Dag W. & Schneider, Jesper W. & Gunnarsson, Magnus, 2012. "Ranking national research systems by citation indicators. A comparative analysis using whole and fractionalised counting methods," Journal of Informetrics, Elsevier, vol. 6(1), pages 36-43.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Mike Thelwall, 2020. "Female citation impact superiority 1996–2018 in six out of seven English‐speaking nations," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 71(8), pages 979-990, August.
    2. Lehmann, Robert & Wohlrabe, Klaus, 2017. "Who is the ‘Journal Grand Master’? A new ranking based on the Elo rating system," Journal of Informetrics, Elsevier, vol. 11(3), pages 800-809.
    3. Alberto Martín-Martín & Enrique Orduna-Malea & Emilio Delgado López-Cózar, 2018. "Coverage of highly-cited documents in Google Scholar, Web of Science, and Scopus: a multidisciplinary comparison," Scientometrics, Springer; Akadémiai Kiadó, vol. 116(3), pages 2175-2188, September.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Thelwall, Mike, 2017. "Three practical field normalised alternative indicator formulae for research evaluation," Journal of Informetrics, Elsevier, vol. 11(1), pages 128-151.
    2. Thelwall, Mike, 2016. "The precision of the arithmetic mean, geometric mean and percentiles for citation data: An experimental simulation modelling approach," Journal of Informetrics, Elsevier, vol. 10(1), pages 110-123.
    3. Fairclough, Ruth & Thelwall, Mike, 2015. "More precise methods for national research citation impact comparisons," Journal of Informetrics, Elsevier, vol. 9(4), pages 895-906.
    4. Waltman, Ludo, 2016. "A review of the literature on citation impact indicators," Journal of Informetrics, Elsevier, vol. 10(2), pages 365-391.
    5. Thelwall, Mike, 2016. "Are there too many uncited articles? Zero inflated variants of the discretised lognormal and hooked power law distributions," Journal of Informetrics, Elsevier, vol. 10(2), pages 622-633.
    6. Thelwall, Mike, 2016. "Are the discretised lognormal and hooked power law distributions plausible for citation data?," Journal of Informetrics, Elsevier, vol. 10(2), pages 454-470.
    7. Vîiu, Gabriel-Alexandru, 2018. "The lognormal distribution explains the remarkable pattern documented by characteristic scores and scales in scientometrics," Journal of Informetrics, Elsevier, vol. 12(2), pages 401-415.
    8. Bornmann, Lutz & Haunschild, Robin, 2018. "Normalization of zero-inflated data: An empirical analysis of a new indicator family and its use with altmetrics data," Journal of Informetrics, Elsevier, vol. 12(3), pages 998-1011.
    9. Thelwall, Mike & Sud, Pardeep, 2016. "National, disciplinary and temporal variations in the extent to which articles with more authors have more impact: Evidence from a geometric field normalised citation indicator," Journal of Informetrics, Elsevier, vol. 10(1), pages 48-61.
    10. Thelwall, Mike, 2016. "Citation count distributions for large monodisciplinary journals," Journal of Informetrics, Elsevier, vol. 10(3), pages 863-874.
    11. Wang, Xing & Zhang, Zhihui, 2020. "Improving the reliability of short-term citation impact indicators by taking into account the correlation between short- and long-term citation impact," Journal of Informetrics, Elsevier, vol. 14(2).
    12. Thelwall, Mike, 2016. "The discretised lognormal and hooked power law distributions for complete citation data: Best options for modelling and regression," Journal of Informetrics, Elsevier, vol. 10(2), pages 336-346.
    13. Bornmann, Lutz, 2019. "Does the normalized citation impact of universities profit from certain properties of their published documents – such as the number of authors and the impact factor of the publishing journals? A mult," Journal of Informetrics, Elsevier, vol. 13(1), pages 170-184.
    14. Mingers, John & Leydesdorff, Loet, 2015. "A review of theory and practice in scientometrics," European Journal of Operational Research, Elsevier, vol. 246(1), pages 1-19.
    15. Mike Thelwall, 2019. "The influence of highly cited papers on field normalised indicators," Scientometrics, Springer; Akadémiai Kiadó, vol. 118(2), pages 519-537, February.
    16. Mike Thelwall & Kayvan Kousha & Mahshid Abdoli, 2017. "Is medical research informing professional practice more highly cited? Evidence from AHFS DI Essentials in drugs.com," Scientometrics, Springer; Akadémiai Kiadó, vol. 112(1), pages 509-527, July.
    17. Bouyssou, Denis & Marchant, Thierry, 2016. "Ranking authors using fractional counting of citations: An axiomatic approach," Journal of Informetrics, Elsevier, vol. 10(1), pages 183-199.
    18. Zahedi, Zohreh & Haustein, Stefanie, 2018. "On the relationships between bibliographic characteristics of scientific documents and citation and Mendeley readership counts: A large-scale analysis of Web of Science publications," Journal of Informetrics, Elsevier, vol. 12(1), pages 191-202.
    19. Bornmann, Lutz & Williams, Richard, 2017. "Can the journal impact factor be used as a criterion for the selection of junior researchers? A large-scale empirical study based on ResearcherID data," Journal of Informetrics, Elsevier, vol. 11(3), pages 788-799.
    20. Dunaiski, Marcel & Geldenhuys, Jaco & Visser, Willem, 2019. "On the interplay between normalisation, bias, and performance of paper impact metrics," Journal of Informetrics, Elsevier, vol. 13(1), pages 270-290.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:infome:v:11:y:2017:i:2:p:530-540. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/joi .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.