
Normalization of Mendeley reader impact on the reader- and paper-side: A comparison of the mean discipline normalized reader score (MDNRS) with the mean normalized reader score (MNRS) and bare reader counts

Author

Listed:
  • Bornmann, Lutz
  • Haunschild, Robin

Abstract

For the normalization of citation counts, two different kinds of methods are possible and used in bibliometrics: the cited-side and citing-side normalizations, both of which can also be applied to the normalization of Mendeley reader counts. Haunschild and Bornmann (2016a) introduced the paper-side normalization of reader counts (mean normalized reader score, MNRS), which is an adaptation of the cited-side normalization. Since the calculation of the MNRS requires data beyond Mendeley itself (a field-classification scheme, such as the Web of Science subject categories), we introduce here the reader-side normalization of reader counts, which is an adaptation of the citing-side normalization and needs no further data from other sources, because self-assigned Mendeley disciplines are used. In this study, all articles and reviews of the Web of Science core collection with publication year 2012 (and a DOI) are used to normalize their Mendeley reader counts. The newly proposed indicator (mean discipline normalized reader score, MDNRS) is calculated, compared with the MNRS and bare reader counts, and examined both theoretically and empirically. We find that: (i) normalization of Mendeley reader counts is necessary, (ii) the MDNRS is able to normalize Mendeley reader counts in several disciplines, and (iii) the MNRS is able to normalize Mendeley reader counts in all disciplines. This generally favorable result for the MNRS leads to the recommendation to prefer the MNRS over the MDNRS, provided that the user has an external field-classification scheme at hand.
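The two normalization approaches described in the abstract can be sketched in Python under simplified, assumed definitions: for the MNRS (paper-side), a paper's reader count is divided by the mean reader count of the papers in its field; for the MDNRS (reader-side), each individual reader is weighted by the reciprocal of the mean per-paper reader count of that reader's self-assigned Mendeley discipline. The toy data, field labels, and the exact MDNRS weighting are illustrative assumptions, not the paper's precise formulas.

```python
from collections import defaultdict
from statistics import mean

# Toy records: each paper carries a WoS-style field label and the
# self-assigned Mendeley disciplines of its individual readers.
papers = [
    {"field": "Chemistry", "reader_disciplines": ["Chemistry", "Chemistry", "Physics"]},
    {"field": "Chemistry", "reader_disciplines": ["Chemistry"]},
    {"field": "Economics", "reader_disciplines": ["Economics", "Economics"]},
]

def mnrs(papers):
    """Paper-side normalization (MNRS): divide each paper's reader
    count by the mean reader count of the papers in its field."""
    by_field = defaultdict(list)
    for p in papers:
        by_field[p["field"]].append(len(p["reader_disciplines"]))
    field_mean = {f: mean(counts) for f, counts in by_field.items()}
    return [len(p["reader_disciplines"]) / field_mean[p["field"]] for p in papers]

def mdnrs(papers):
    """Reader-side normalization (MDNRS, simplified): weight each
    reader by the reciprocal of the mean per-paper count of readers
    from that reader's Mendeley discipline."""
    per_discipline = defaultdict(list)
    for p in papers:
        counts = defaultdict(int)
        for d in p["reader_disciplines"]:
            counts[d] += 1
        for d, c in counts.items():
            per_discipline[d].append(c)
    disc_mean = {d: mean(c) for d, c in per_discipline.items()}
    return [sum(1.0 / disc_mean[d] for d in p["reader_disciplines"]) for p in papers]

print(mnrs(papers))   # paper-side scores, one per paper
print(mdnrs(papers))  # reader-side scores, one per paper
```

In this simplified form the reader-side variant needs only Mendeley's own discipline labels, while the paper-side variant presupposes an external field classification for every paper, which mirrors the trade-off the abstract describes.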

Suggested Citation

  • Bornmann, Lutz & Haunschild, Robin, 2016. "Normalization of Mendeley reader impact on the reader- and paper-side: A comparison of the mean discipline normalized reader score (MDNRS) with the mean normalized reader score (MNRS) and bare reader counts," Journal of Informetrics, Elsevier, vol. 10(3), pages 776-788.
  • Handle: RePEc:eee:infome:v:10:y:2016:i:3:p:776-788
    DOI: 10.1016/j.joi.2016.04.015

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S1751157715302042
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.joi.2016.04.015?utm_source=ideas


    References listed on IDEAS

    1. Lutz Bornmann & Felix Moya Anegón & Rüdiger Mutz, 2013. "Do universities or research institutions with a specific subject profile have an advantage or a disadvantage in institutional rankings?," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 64(11), pages 2310-2316, November.
    2. Loet Leydesdorff & Tobias Opthof, 2010. "Scopus's source normalized impact per paper (SNIP) versus a journal impact factor based on fractional counting of citations," Journal of the American Society for Information Science and Technology, Association for Information Science & Technology, vol. 61(11), pages 2365-2369, November.
    3. Fairclough, Ruth & Thelwall, Mike, 2015. "National research impact indicators from Mendeley readers," Journal of Informetrics, Elsevier, vol. 9(4), pages 845-859.
    4. Ludo Waltman & Nees Jan van Eck, 2012. "A new methodology for constructing a publication-level classification system of science," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 63(12), pages 2378-2392, December.
    5. Loet Leydesdorff & Lutz Bornmann, 2011. "How fractional counting of citations affects the impact factor: Normalization in terms of differences in citation potentials among fields of science," Journal of the American Society for Information Science and Technology, Association for Information Science & Technology, vol. 62(2), pages 217-229, February.
    6. Lutz Bornmann & Werner Marx, 2014. "The wisdom of citing scientists," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 65(6), pages 1288-1292, June.
    7. Waltman, Ludo & van Eck, Nees Jan, 2013. "A systematic empirical comparison of different approaches for normalizing citation impact indicators," Journal of Informetrics, Elsevier, vol. 7(4), pages 833-849.
    8. Waltman, Ludo & van Eck, Nees Jan & van Leeuwen, Thed N. & Visser, Martijn S. & van Raan, Anthony F.J., 2011. "Towards a new crown indicator: Some theoretical considerations," Journal of Informetrics, Elsevier, vol. 5(1), pages 37-47.
    9. Waltman, L. & van Eck, N.J.P., 2009. "A Taxonomy of Bibliometric Performance Indicators Based on the Property of Consistency," ERIM Report Series Research in Management ERS-2009-014-LIS, Erasmus Research Institute of Management (ERIM), Erasmus University Rotterdam.
    10. Haunschild, Robin & Bornmann, Lutz, 2016. "Normalization of Mendeley reader counts for impact assessment," Journal of Informetrics, Elsevier, vol. 10(1), pages 62-73.
    11. Loet Leydesdorff & Jung C. Shin, 2011. "How to evaluate universities in terms of their relative citation impacts: Fractional counting of citations and the normalization of differences among disciplines," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 62(6), pages 1146-1155, June.
    12. Michel Zitt & Henry Small, 2008. "Modifying the journal impact factor by fractional citation weighting: The audience factor," Journal of the American Society for Information Science and Technology, Association for Information Science & Technology, vol. 59(11), pages 1856-1860, September.
    13. Bornmann, Lutz & Haunschild, Robin, 2015. "Which people use which scientific papers? An evaluation of data from F1000 and Mendeley," Journal of Informetrics, Elsevier, vol. 9(3), pages 477-487.
    14. Ludo Waltman & Nees Jan van Eck, 2013. "Source normalized indicators of citation impact: an overview of different approaches and an empirical comparison," Scientometrics, Springer;Akadémiai Kiadó, vol. 96(3), pages 699-716, September.
    15. Bornmann, Lutz, 2014. "Do altmetrics point to the broader impact of research? An overview of benefits and disadvantages of altmetrics," Journal of Informetrics, Elsevier, vol. 8(4), pages 895-903.
    16. Loet Leydesdorff & Filippo Radicchi & Lutz Bornmann & Claudio Castellano & Wouter Nooy, 2013. "Field-normalized impact factors (IFs): A comparison of rescaling and fractionally counted IFs," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 64(11), pages 2299-2309, November.
    17. Ludo Waltman & Nees Jan van Eck, 2012. "A new methodology for constructing a publication-level classification system of science," Journal of the American Society for Information Science and Technology, Association for Information Science & Technology, vol. 63(12), pages 2378-2392, December.
    18. Thed van Leeuwen, 2006. "The application of bibliometric analyses in the evaluation of social science research. Who benefits from it, and why it is still feasible," Scientometrics, Springer;Akadémiai Kiadó, vol. 66(1), pages 133-154, January.
    19. Bornmann, Lutz & Leydesdorff, Loet, 2013. "The validation of (advanced) bibliometric indicators through peer assessments: A comparative study using data from InCites and F1000," Journal of Informetrics, Elsevier, vol. 7(2), pages 286-291.
    20. Bornmann, Lutz & Marx, Werner, 2015. "Methods for the generation of normalized citation impact scores in bibliometrics: Which method best reflects the judgements of experts?," Journal of Informetrics, Elsevier, vol. 9(2), pages 408-418.
    21. Michel Zitt & Suzy Ramanana-Rahary & Elise Bassecoulard, 2005. "Relativity of citation performance and excellence measures: From cross-field to cross-scale effects of field-normalisation," Scientometrics, Springer;Akadémiai Kiadó, vol. 63(2), pages 373-401, April.
    22. Loet Leydesdorff & Ping Zhou & Lutz Bornmann, 2013. "How can journal impact factors be normalized across fields of science? An assessment in terms of percentile ranks and fractional counts," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 64(1), pages 96-107, January.
    23. Bornmann, Lutz, 2014. "Validity of altmetrics data for measuring societal impact: A study using data from Altmetric and F1000Prime," Journal of Informetrics, Elsevier, vol. 8(4), pages 935-950.
    24. Ehsan Mohammadi & Mike Thelwall & Kayvan Kousha, 2016. "Can Mendeley bookmarks reflect readership? A survey of user motivations," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 67(5), pages 1198-1209, May.
    25. Wei Jeng & Daqing He & Jiepu Jiang, 2015. "User participation in an academic social networking service: A survey of open group users on Mendeley," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 66(5), pages 890-904, May.
    26. Lutz Bornmann & Felix de Moya Anegón & Rüdiger Mutz, 2013. "Do Universities or Research Institutions With a Specific Subject Profile Have an Advantage or a Disadvantage in Institutional Rankings? A Latent Class Analysis With Data From the SCImago Ranking," Journal of the American Society for Information Science and Technology, Association for Information Science & Technology, vol. 64(11), pages 2310-2316, November.

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Robin Haunschild & Lutz Bornmann, 2018. "Field- and time-normalization of data with many zeros: an empirical analysis using citation and Twitter data," Scientometrics, Springer;Akadémiai Kiadó, vol. 116(2), pages 997-1012, August.
    2. Thelwall, Mike, 2017. "Three practical field normalised alternative indicator formulae for research evaluation," Journal of Informetrics, Elsevier, vol. 11(1), pages 128-151.
    3. Lutz Bornmann & Rüdiger Mutz & Robin Haunschild & Felix Moya-Anegon & Mirko Almeida Madeira Clemente & Moritz Stefaner, 2021. "Mapping the impact of papers on various status groups in excellencemapping.net: a new release of the excellence mapping tool based on citation and reader scores," Scientometrics, Springer;Akadémiai Kiadó, vol. 126(11), pages 9305-9331, November.
    4. Lutz Bornmann & Robin Haunschild, 2016. "How to normalize Twitter counts? A first attempt based on journals in the Twitter Index," Scientometrics, Springer;Akadémiai Kiadó, vol. 107(3), pages 1405-1422, June.
    5. Zahedi, Zohreh & Haustein, Stefanie, 2018. "On the relationships between bibliographic characteristics of scientific documents and citation and Mendeley readership counts: A large-scale analysis of Web of Science publications," Journal of Informetrics, Elsevier, vol. 12(1), pages 191-202.
    6. Bornmann, Lutz & Haunschild, Robin, 2016. "Citation score normalized by cited references (CSNCR): The introduction of a new citation impact indicator," Journal of Informetrics, Elsevier, vol. 10(3), pages 875-887.
    7. Bornmann, Lutz & Haunschild, Robin, 2018. "Normalization of zero-inflated data: An empirical analysis of a new indicator family and its use with altmetrics data," Journal of Informetrics, Elsevier, vol. 12(3), pages 998-1011.
    8. Bornmann, Lutz & Marx, Werner, 2018. "Critical rationalism and the search for standard (field-normalized) indicators in bibliometrics," Journal of Informetrics, Elsevier, vol. 12(3), pages 598-604.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Bornmann, Lutz & Haunschild, Robin, 2016. "Citation score normalized by cited references (CSNCR): The introduction of a new citation impact indicator," Journal of Informetrics, Elsevier, vol. 10(3), pages 875-887.
    2. Waltman, Ludo, 2016. "A review of the literature on citation impact indicators," Journal of Informetrics, Elsevier, vol. 10(2), pages 365-391.
    3. Bouyssou, Denis & Marchant, Thierry, 2016. "Ranking authors using fractional counting of citations: An axiomatic approach," Journal of Informetrics, Elsevier, vol. 10(1), pages 183-199.
    4. Waltman, Ludo & van Eck, Nees Jan, 2013. "A systematic empirical comparison of different approaches for normalizing citation impact indicators," Journal of Informetrics, Elsevier, vol. 7(4), pages 833-849.
    5. Mingers, John & Leydesdorff, Loet, 2015. "A review of theory and practice in scientometrics," European Journal of Operational Research, Elsevier, vol. 246(1), pages 1-19.
    6. Bornmann, Lutz & Marx, Werner, 2015. "Methods for the generation of normalized citation impact scores in bibliometrics: Which method best reflects the judgements of experts?," Journal of Informetrics, Elsevier, vol. 9(2), pages 408-418.
    7. Cristian Colliander & Per Ahlgren, 2019. "Comparison of publication-level approaches to ex-post citation normalization," Scientometrics, Springer;Akadémiai Kiadó, vol. 120(1), pages 283-300, July.
    8. Liwei Cai & Jiahao Tian & Jiaying Liu & Xiaomei Bai & Ivan Lee & Xiangjie Kong & Feng Xia, 2019. "Scholarly impact assessment: a survey of citation weighting solutions," Scientometrics, Springer;Akadémiai Kiadó, vol. 118(2), pages 453-478, February.
    9. Cristiano Varin & Manuela Cattelan & David Firth, 2016. "Statistical modelling of citation exchange between statistics journals," Journal of the Royal Statistical Society Series A, Royal Statistical Society, vol. 179(1), pages 1-63, January.
    10. Bornmann, Lutz & Haunschild, Robin & Mutz, Rüdiger, 2020. "Should citations be field-normalized in evaluative bibliometrics? An empirical analysis based on propensity score matching," Journal of Informetrics, Elsevier, vol. 14(4).
    11. Mojisola Erdt & Aarthy Nagarajan & Sei-Ching Joanna Sin & Yin-Leng Theng, 2016. "Altmetrics: an analysis of the state-of-the-art in measuring research impact on social media," Scientometrics, Springer;Akadémiai Kiadó, vol. 109(2), pages 1117-1166, November.
    12. Bornmann, Lutz & Marx, Werner, 2018. "Critical rationalism and the search for standard (field-normalized) indicators in bibliometrics," Journal of Informetrics, Elsevier, vol. 12(3), pages 598-604.
    13. Li, Yunrong & Ruiz-Castillo, Javier, 2013. "The comparison of normalization procedures based on different classification systems," Journal of Informetrics, Elsevier, vol. 7(4), pages 945-958.
    14. Lutz Bornmann & Alexander Tekles & Loet Leydesdorff, 2019. "How well does I3 perform for impact measurement compared to other bibliometric indicators? The convergent validity of several (field-normalized) indicators," Scientometrics, Springer;Akadémiai Kiadó, vol. 119(2), pages 1187-1205, May.
    15. Tolga Yuret, 2018. "Author-weighted impact factor and reference return ratio: can we attain more equality among fields?," Scientometrics, Springer;Akadémiai Kiadó, vol. 116(3), pages 2097-2111, September.
    16. Haunschild, Robin & Bornmann, Lutz, 2016. "Normalization of Mendeley reader counts for impact assessment," Journal of Informetrics, Elsevier, vol. 10(1), pages 62-73.
    17. Ahlgren, Per & Waltman, Ludo, 2014. "The correlation between citation-based and expert-based assessments of publication channels: SNIP and SJR vs. Norwegian quality assessments," Journal of Informetrics, Elsevier, vol. 8(4), pages 985-996.
    18. Bornmann, Lutz & Haunschild, Robin, 2018. "Normalization of zero-inflated data: An empirical analysis of a new indicator family and its use with altmetrics data," Journal of Informetrics, Elsevier, vol. 12(3), pages 998-1011.
    19. Lutz Bornmann & Robin Haunschild, 2016. "How to normalize Twitter counts? A first attempt based on journals in the Twitter Index," Scientometrics, Springer;Akadémiai Kiadó, vol. 107(3), pages 1405-1422, June.
    20. Ruiz-Castillo, Javier & Waltman, Ludo, 2015. "Field-normalized citation impact indicators using algorithmically constructed classification systems of science," Journal of Informetrics, Elsevier, vol. 9(1), pages 102-117.


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.