
F1000Prime recommended articles and their citations: an exploratory study of four journals

Authors

Listed:
  • Peiling Wang (The University of Tennessee)
  • Joshua Williams (The University of Tennessee)
  • Nan Zhang (University of Science and Technology of China)
  • Qiang Wu (University of Science and Technology of China)

Abstract

This study examined F1000Prime-recommended research and review articles published in Cell, JAMA: The Journal of the American Medical Association, The Lancet, and The New England Journal of Medicine (NEJM) in 2010. The analyses included (1) the classifications assigned to the articles; (2) differences in Web of Science (WoS) citation counts over 9 years between the articles with F1000Prime recommendations and the other articles of the same journal; (3) correlations between the F1000Prime rating scores and WoS citation counts; (4) scaled graphic comparisons of the two measures; and (5) content analysis of the top five WoS-cited and top five F1000Prime-scored NEJM articles. The results show that most of the recommended articles were classified as New Finding, Clinical Trial, Confirmation, Interesting Hypothesis, or Technical Advance. The top classifications differed between the medical journals (JAMA, The Lancet, and NEJM) and the biology journal (Cell); for the latter, both New Finding and Interesting Hypothesis occurred more frequently than in the three medical journals. For the three medical journals, the articles recommended by F1000 Faculty members were cited significantly more than the other articles of the same journal, but no significant difference was found between the two sets of articles in Cell. The correlations between the F1000Prime rating scores and WoS citation counts of the articles in the same journal were significant for two medical journals (The Lancet and NEJM) and the biology journal (Cell); NEJM showed significant correlations in both the upper-quantile (top 50%) and upper-quartile (top 25%) sets, whereas the remaining medical journal, JAMA, showed no significant correlation between the two measures. Despite the significant correlations for these three journals, Min–Max scaled graphic comparisons of the two measures did not reveal any patterns for predicting citation trends from F1000Prime rating scores. For NEJM, the peak citation year of the articles ranged from 2 to 8 years after the publication year. Content analysis of the top-cited and top-scored NEJM articles found that highly commended papers with comments such as "exceptional," "landmark study," or "paradigm shift" received varied rating scores. Some of these results corroborate previous studies. Further studies are suggested to include additional journals and different years, as well as alternative methods. Studies are also needed to understand how F1000 Faculty assign ratings and what criteria they use, and how F1000Prime users interpret the meanings of the ratings.
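
To make concrete what analyses (3) and (4) of the abstract involve, the following is a minimal sketch in Python. It is an illustration only: the record does not state the paper's exact statistical procedure, so the sketch assumes Spearman's rank correlation (a common choice for skewed citation data) and standard Min–Max scaling, applied to invented rating scores and citation counts rather than the study's data.

    # Illustrative sketch only: assumes Spearman rank correlation and
    # standard Min-Max scaling; the scores and counts below are invented.
    from scipy.stats import spearmanr

    def min_max_scale(values):
        """Rescale values to [0, 1]; assumes values are not all equal."""
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) for v in values]

    # Hypothetical F1000Prime rating scores and 9-year WoS citation
    # counts for six articles from one journal (not the study's data).
    f1000_scores = [1, 2, 2, 3, 6, 8]
    wos_citations = [40, 15, 120, 90, 300, 210]

    # Analysis (3): correlation between the two measures.
    rho, p_value = spearmanr(f1000_scores, wos_citations)
    print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")

    # Analysis (4): Min-Max scaling puts both measures on a common
    # [0, 1] axis so their trends can be plotted and compared directly.
    print(min_max_scale(f1000_scores))
    print(min_max_scale(wos_citations))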

Suggested Citation

  • Peiling Wang & Joshua Williams & Nan Zhang & Qiang Wu, 2020. "F1000Prime recommended articles and their citations: an exploratory study of four journals," Scientometrics, Springer;Akadémiai Kiadó, vol. 122(2), pages 933-955, February.
  • Handle: RePEc:spr:scient:v:122:y:2020:i:2:d:10.1007_s11192-019-03302-w
    DOI: 10.1007/s11192-019-03302-w

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s11192-019-03302-w
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s11192-019-03302-w?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Bornmann, Lutz & Leydesdorff, Loet, 2015. "Does quality and content matter for citedness? A comparison with para-textual factors and over time," Journal of Informetrics, Elsevier, vol. 9(3), pages 419-429.
    2. Robin Haunschild & Lutz Bornmann, 2018. "Field- and time-normalization of data with many zeros: an empirical analysis using citation and Twitter data," Scientometrics, Springer;Akadémiai Kiadó, vol. 116(2), pages 997-1012, August.
    3. Bornmann, Lutz & Leydesdorff, Loet & Wang, Jian, 2013. "Which percentile-based approach should be preferred for calculating normalized citation impact values? An empirical comparison of five approaches including a newly developed citation-rank approach (P1," Journal of Informetrics, Elsevier, vol. 7(4), pages 933-944.
    4. Peiling Wang & Marilyn Domas White, 1999. "A cognitive model of document use during a research project. Study II. Decisions at the reading and citing stages," Journal of the American Society for Information Science, Association for Information Science & Technology, vol. 50(2), pages 98-114.
    5. Patrizio E Tressoldi & David Giofré & Francesco Sella & Geoff Cumming, 2013. "High Impact = High Statistical Standards? Not Necessarily So," PLOS ONE, Public Library of Science, vol. 8(2), pages 1-7, February.
    6. Lutz Bornmann & Rüdiger Mutz, 2015. "Growth rates of modern science: A bibliometric analysis based on the number of publications and cited references," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 66(11), pages 2215-2222, November.
    7. Jian Wang, 2013. "Citation time window choice for research impact evaluation," Scientometrics, Springer;Akadémiai Kiadó, vol. 94(3), pages 851-872, March.
    8. Lutz Bornmann, 2015. "Interrater reliability and convergent validity of F1000Prime peer review," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 66(12), pages 2415-2426, December.
    9. Bornmann, Lutz & Haunschild, Robin, 2015. "Which people use which scientific papers? An evaluation of data from F1000 and Mendeley," Journal of Informetrics, Elsevier, vol. 9(3), pages 477-487.
    10. Henry Small, 2004. "Why authors think their papers are highly cited," Scientometrics, Springer;Akadémiai Kiadó, vol. 60(3), pages 305-316, August.
    11. Bornmann, Lutz & Leydesdorff, Loet, 2013. "The validation of (advanced) bibliometric indicators through peer assessments: A comparative study using data from InCites and F1000," Journal of Informetrics, Elsevier, vol. 7(2), pages 286-291.
    12. Bornmann, Lutz & Marx, Werner, 2015. "Methods for the generation of normalized citation impact scores in bibliometrics: Which method best reflects the judgements of experts?," Journal of Informetrics, Elsevier, vol. 9(2), pages 408-418.
    13. Lutz Bornmann & Robin Haunschild, 2018. "Do altmetrics correlate with the quality of papers? A large-scale empirical study based on F1000Prime data," PLOS ONE, Public Library of Science, vol. 13(5), pages 1-12, May.
    14. Ludo Waltman & Rodrigo Costas, 2014. "F1000 Recommendations as a Potential New Data Source for Research Evaluation: A Comparison With Citations," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 65(3), pages 433-445, March.
    15. Ehsan Mohammadi & Mike Thelwall, 2013. "Assessing non-standard article impact using F1000 labels," Scientometrics, Springer;Akadémiai Kiadó, vol. 97(2), pages 383-395, November.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.

    Cited by:

    1. Wang, Peiling & Su, Jing, 2021. "Post-publication expert recommendations in faculty opinions (F1000Prime): Recommended articles and citations," Journal of Informetrics, Elsevier, vol. 15(3).
    2. Shi, Xuanyu & Du, Jian, 2022. "Distinguishing transformative from incremental clinical evidence: A classifier of clinical research using textual features from abstracts and citing sentences," Journal of Informetrics, Elsevier, vol. 16(2).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Mojisola Erdt & Aarthy Nagarajan & Sei-Ching Joanna Sin & Yin-Leng Theng, 2016. "Altmetrics: an analysis of the state-of-the-art in measuring research impact on social media," Scientometrics, Springer;Akadémiai Kiadó, vol. 109(2), pages 1117-1166, November.
    2. Wang, Peiling & Su, Jing, 2021. "Post-publication expert recommendations in faculty opinions (F1000Prime): Recommended articles and citations," Journal of Informetrics, Elsevier, vol. 15(3).
    3. Bornmann, Lutz & Tekles, Alexander & Zhang, Helena H. & Ye, Fred Y., 2019. "Do we measure novelty when we analyze unusual combinations of cited references? A validation study of bibliometric novelty indicators based on F1000Prime data," Journal of Informetrics, Elsevier, vol. 13(4).
    4. Bornmann, Lutz, 2014. "Validity of altmetrics data for measuring societal impact: A study using data from Altmetric and F1000Prime," Journal of Informetrics, Elsevier, vol. 8(4), pages 935-950.
    5. Bornmann, Lutz & Marx, Werner, 2015. "Methods for the generation of normalized citation impact scores in bibliometrics: Which method best reflects the judgements of experts?," Journal of Informetrics, Elsevier, vol. 9(2), pages 408-418.
    6. Bornmann, Lutz & Leydesdorff, Loet & Wang, Jian, 2014. "How to improve the prediction based on citation impact percentiles for years shortly after the publication date?," Journal of Informetrics, Elsevier, vol. 8(1), pages 175-180.
    7. Jianhua Hou & Xiucai Yang & Yang Zhang, 2023. "The effect of social media knowledge cascade: an analysis of scientific papers diffusion," Scientometrics, Springer;Akadémiai Kiadó, vol. 128(9), pages 5169-5195, September.
    8. Thelwall, Mike & Fairclough, Ruth, 2015. "The influence of time and discipline on the magnitude of correlations between citation counts and quality scores," Journal of Informetrics, Elsevier, vol. 9(3), pages 529-541.
    9. Bornmann, Lutz & Leydesdorff, Loet, 2015. "Does quality and content matter for citedness? A comparison with para-textual factors and over time," Journal of Informetrics, Elsevier, vol. 9(3), pages 419-429.
    10. Lutz Bornmann, 2015. "Interrater reliability and convergent validity of F1000Prime peer review," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 66(12), pages 2415-2426, December.
    11. Bornmann, Lutz, 2014. "Do altmetrics point to the broader impact of research? An overview of benefits and disadvantages of altmetrics," Journal of Informetrics, Elsevier, vol. 8(4), pages 895-903.
    12. Bornmann, Lutz & Haunschild, Robin, 2016. "Normalization of Mendeley reader impact on the reader- and paper-side: A comparison of the mean discipline normalized reader score (MDNRS) with the mean normalized reader score (MNRS) and bare reader ," Journal of Informetrics, Elsevier, vol. 10(3), pages 776-788.
    13. Hou, Jianhua & Yang, Xiucai, 2020. "Social media-based sleeping beauties: Defining, identifying and features," Journal of Informetrics, Elsevier, vol. 14(2).
    14. Sergio Copiello, 2020. "Other than detecting impact in advance, alternative metrics could act as early warning signs of retractions: tentative findings of a study into the papers retracted by PLoS ONE," Scientometrics, Springer;Akadémiai Kiadó, vol. 125(3), pages 2449-2469, December.
    15. Cao, Xuanyu & Chen, Yan & Ray Liu, K.J., 2016. "A data analytic approach to quantifying scientific impact," Journal of Informetrics, Elsevier, vol. 10(2), pages 471-484.
    16. Dunaiski, Marcel & Geldenhuys, Jaco & Visser, Willem, 2019. "On the interplay between normalisation, bias, and performance of paper impact metrics," Journal of Informetrics, Elsevier, vol. 13(1), pages 270-290.
    17. Fei Shu, 2017. "Comment to: Does China need to rethink its metrics- and citation-based research rewards policies?," Scientometrics, Springer;Akadémiai Kiadó, vol. 113(2), pages 1229-1231, November.
    18. David I Stern, 2014. "High-Ranked Social Science Journal Articles Can Be Identified from Early Citation Information," PLOS ONE, Public Library of Science, vol. 9(11), pages 1-11, November.
    19. Soo Jeung Lee & Christian Schneijderberg & Yangson Kim & Isabel Steinhardt, 2021. "Have Academics’ Citation Patterns Changed in Response to the Rise of World University Rankings? A Test Using First-Citation Speeds," Sustainability, MDPI, vol. 13(17), pages 1-19, August.
    20. Yu Zhang & Min Wang & Morteza Saberi & Elizabeth Chang, 2022. "Analysing academic paper ranking algorithms using test data and benchmarks: an investigation," Scientometrics, Springer;Akadémiai Kiadó, vol. 127(7), pages 4045-4074, July.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:scient:v:122:y:2020:i:2:d:10.1007_s11192-019-03302-w. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.