
Peer-Selected “Best Papers”—Are They Really That “Good”?

Authors
  • Jacques Wainer
  • Michael Eckmann
  • Anderson Rocha

Abstract

Background: Peer evaluation is the cornerstone of science evaluation. In this paper, we analyze whether a form of peer evaluation, the pre-publication selection of the best papers at Computer Science (CS) conferences, is better than random selection when measured by the future citations the papers receive.

Methods: For 12 conferences (across several years), we collected citation counts from Scopus for both the best papers and the non-best papers. For a different set of 17 conferences, we collected the data from Google Scholar. For each data set, we computed the proportion of cases in which the best paper has more citations. We also compared this proportion for years before and after 2010 to evaluate whether there is a propaganda effect. Finally, we counted the proportion of best papers that are among the top 10% and top 20% most cited papers for each conference instance.

Results: The probability that a best paper will receive more citations than a non-best paper is 0.72 (95% CI = 0.66, 0.77) for the Scopus data and 0.78 (95% CI = 0.74, 0.81) for the Scholar data. There are no significant changes in these probabilities across years. Also, 51% of the best papers are among the top 10% most cited papers in each conference/year, and 64% are among the top 20% most cited.

Discussion: There is strong evidence that the selection of best papers at Computer Science conferences is better than random selection, and that a substantial number of the best papers are among the top-cited papers of their conference.
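The two measures described in the Methods can be sketched as follows: the pairwise "probability of superiority" (the fraction of best/non-best pairs in which the best paper has strictly more citations) and the share of best papers in the top 10% and 20% most cited for a single conference instance. The citation counts below are hypothetical placeholders, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical citation counts for one conference/year:
# a few "best papers" plus the remaining accepted papers.
best = np.array([40, 25, 18])       # citations of best papers
others = rng.poisson(8, size=100)   # citations of non-best papers

# Probability of superiority: over all (best, non-best) pairs,
# the fraction in which the best paper has strictly more citations.
pairs = best[:, None] > others[None, :]
p_superiority = pairs.mean()

# Share of best papers among the top 10% / 20% most cited
# in this conference instance (best papers included in the pool).
pool = np.concatenate([best, others])
top10_cutoff = np.quantile(pool, 0.90)
top20_cutoff = np.quantile(pool, 0.80)
share_top10 = (best >= top10_cutoff).mean()
share_top20 = (best >= top20_cutoff).mean()

print(p_superiority, share_top10, share_top20)
```

The paper aggregates these per-conference proportions across many conference instances; this sketch shows only the per-instance computation.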

Suggested Citation

  • Jacques Wainer & Michael Eckmann & Anderson Rocha, 2015. "Peer-Selected “Best Papers”—Are They Really That “Good”?," PLOS ONE, Public Library of Science, vol. 10(3), pages 1-12, March.
  • Handle: RePEc:plo:pone00:0118446
    DOI: 10.1371/journal.pone.0118446

    Download full text from publisher

    File URL: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0118446
    Download Restriction: no

    File URL: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0118446&type=printable
    Download Restriction: no

    References listed on IDEAS

    1. Judit Bar-Ilan, 2010. "Web of Science with the Conference Proceedings Citation Indexes: the case of computer science," Scientometrics, Springer;Akadémiai Kiadó, vol. 83(3), pages 809-824, June.
    2. Lutz Bornmann & Rüdiger Mutz & Hans‐Dieter Daniel, 2008. "Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine," Journal of the American Society for Information Science and Technology, Association for Information Science & Technology, vol. 59(5), pages 830-837, March.
    3. Vieira, Elizabeth S. & Cabral, José A.S. & Gomes, José A.N.F., 2014. "How good is a model based on bibliometric indicators in predicting the final decisions made by peers?," Journal of Informetrics, Elsevier, vol. 8(2), pages 390-405.
    4. Bornmann, Lutz & Leydesdorff, Loet, 2013. "The validation of (advanced) bibliometric indicators through peer assessments: A comparative study using data from InCites and F1000," Journal of Informetrics, Elsevier, vol. 7(2), pages 286-291.
    5. Álvaro Cabezas-Clavijo & Nicolás Robinson-García & Manuel Escabias & Evaristo Jiménez-Contreras, 2013. "Reviewers’ Ratings and Bibliometric Indicators: Hand in Hand When Assessing Over Research Proposals?," PLOS ONE, Public Library of Science, vol. 8(6), pages 1-1, June.
    6. Jacques Wainer & Paula Vieira, 2013. "Correlations between bibliometrics and peer evaluation for all disciplines: the evaluation of Brazilian scientists," Scientometrics, Springer;Akadémiai Kiadó, vol. 96(2), pages 395-410, August.
    7. Ludo Waltman & Rodrigo Costas, 2014. "F1000 Recommendations as a Potential New Data Source for Research Evaluation: A Comparison With Citations," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 65(3), pages 433-445, March.
    8. Rinia, E. J. & van Leeuwen, Th. N. & van Vuren, H. G. & van Raan, A. F. J., 1998. "Comparative analysis of a set of bibliometric indicators and central peer review criteria: Evaluation of condensed matter physics in the Netherlands," Research Policy, Elsevier, vol. 27(1), pages 95-107, May.
    9. David Michayluk & Ralf Zurbruegg, 2014. "Do lead articles signal higher quality in the digital age? Evidence from finance journals," Scientometrics, Springer;Akadémiai Kiadó, vol. 98(2), pages 961-973, February.
    10. Franceschet, Massimo & Costantini, Antonio, 2011. "The first Italian research assessment exercise: A bibliometric perspective," Journal of Informetrics, Elsevier, vol. 5(2), pages 275-291.
    11. Lutz Bornmann & Hans-Dieter Daniel, 2005. "Does the h-index for ranking of scientists really work?," Scientometrics, Springer;Akadémiai Kiadó, vol. 65(3), pages 391-392, December.
    12. Alonso, S. & Cabrerizo, F.J. & Herrera-Viedma, E. & Herrera, F., 2009. "h-Index: A review focused in its variants, computation and standardization for different scientific fields," Journal of Informetrics, Elsevier, vol. 3(4), pages 273-289.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Giovanni Abramo & Ciriaco Andrea D’Angelo & Emanuela Reale, 2019. "Peer review versus bibliometrics: Which method better predicts the scholarly impact of publications?," Scientometrics, Springer;Akadémiai Kiadó, vol. 121(1), pages 537-554, October.
    2. Thelwall, Mike & Fairclough, Ruth, 2015. "The influence of time and discipline on the magnitude of correlations between citation counts and quality scores," Journal of Informetrics, Elsevier, vol. 9(3), pages 529-541.
    3. Jacques Wainer & Paula Vieira, 2013. "Correlations between bibliometrics and peer evaluation for all disciplines: the evaluation of Brazilian scientists," Scientometrics, Springer;Akadémiai Kiadó, vol. 96(2), pages 395-410, August.
    4. Deming Lin & Tianhui Gong & Wenbin Liu & Martin Meyer, 2020. "An entropy-based measure for the evolution of h index research," Scientometrics, Springer;Akadémiai Kiadó, vol. 125(3), pages 2283-2298, December.
    5. Bouyssou, Denis & Marchant, Thierry, 2014. "An axiomatic approach to bibliometric rankings and indices," Journal of Informetrics, Elsevier, vol. 8(3), pages 449-477.
    6. Dag W. Aksnes & Liv Langfeldt & Paul Wouters, 2019. "Citations, Citation Indicators, and Research Quality: An Overview of Basic Concepts and Theories," SAGE Open, , vol. 9(1), pages 21582440198, February.
    7. Mingkun Wei, 2020. "Research on impact evaluation of open access journals," Scientometrics, Springer;Akadémiai Kiadó, vol. 122(2), pages 1027-1049, February.
    8. D Gnana Bharathi, 2013. "Evaluation and Ranking of Researchers – Bh Index," PLOS ONE, Public Library of Science, vol. 8(12), pages 1-1, December.
    9. Xia Gao & Jiancheng Guan, 2012. "Network model of knowledge diffusion," Scientometrics, Springer;Akadémiai Kiadó, vol. 90(3), pages 749-762, March.
    10. Lutz Bornmann, 2015. "Interrater reliability and convergent validity of F1000Prime peer review," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 66(12), pages 2415-2426, December.
    11. Loet Leydesdorff & Lutz Bornmann & Jonathan Adams, 2019. "The integrated impact indicator revisited (I3*): a non-parametric alternative to the journal impact factor," Scientometrics, Springer;Akadémiai Kiadó, vol. 119(3), pages 1669-1694, June.
    12. Robin Haunschild & Lutz Bornmann, 2018. "Field- and time-normalization of data with many zeros: an empirical analysis using citation and Twitter data," Scientometrics, Springer;Akadémiai Kiadó, vol. 116(2), pages 997-1012, August.
    13. Marta Cardin & Marco Corazza & Stefania Funari & Silvio Giove, 2011. "A fuzzy-based scoring rule for author ranking," Working Papers 2011_11, Department of Economics, University of Venice "Ca' Foscari".
    14. Sigifredo Laengle & José M. Merigó & Nikunja Mohan Modak & Jian-Bo Yang, 2020. "Bibliometrics in operations research and management science: a university analysis," Annals of Operations Research, Springer, vol. 294(1), pages 769-813, November.
    15. Abramo, Giovanni & Cicero, Tindaro & D’Angelo, Ciriaco Andrea, 2011. "Assessing the varying level of impact measurement accuracy as a function of the citation window length," Journal of Informetrics, Elsevier, vol. 5(4), pages 659-667.
    16. Madiha Ameer & Muhammad Tanvir Afzal, 2019. "Evaluation of h-index and its qualitative and quantitative variants in Neuroscience," Scientometrics, Springer;Akadémiai Kiadó, vol. 121(2), pages 653-673, November.
    17. Bornmann, Lutz & Tekles, Alexander & Zhang, Helena H. & Ye, Fred Y., 2019. "Do we measure novelty when we analyze unusual combinations of cited references? A validation study of bibliometric novelty indicators based on F1000Prime data," Journal of Informetrics, Elsevier, vol. 13(4).
    18. Fiorenzo Franceschini & Domenico Maisano & Luca Mastrogiacomo, 2013. "The effect of database dirty data on h-index calculation," Scientometrics, Springer;Akadémiai Kiadó, vol. 95(3), pages 1179-1188, June.
    19. J. E. Hirsch, 2019. "hα: An index to quantify an individual’s scientific leadership," Scientometrics, Springer;Akadémiai Kiadó, vol. 118(2), pages 673-686, February.
    20. Giovanni Abramo & Ciriaco Andrea D’Angelo & Flavia Di Costa, 2011. "National research assessment exercises: a comparison of peer review and bibliometrics rankings," Scientometrics, Springer;Akadémiai Kiadó, vol. 89(3), pages 929-941, December.


    IDEAS is a RePEc service hosted by the Research Division of the Federal Reserve Bank of St. Louis . RePEc uses bibliographic data supplied by the respective publishers.