
On evaluating the quality of a computer science/computer engineering conference

Author

Listed:
  • Loizides, Orestis-Stavros
  • Koutsakis, Polychronis

Abstract

The Peer Reputation (PR) metric was recently proposed in the literature as a way to judge a researcher’s contribution through the quality of the venue in which the researcher’s work is published. PR, proposed by Nelakuditi et al., ties the selectivity of a publication venue to the reputation of the first author’s institution. By computing PR for a percentage of the papers accepted at a conference or journal, a more solid indicator of a venue’s selectivity than the paper Acceptance Ratio (AR) can be derived. In recent work we explained why we agree that PR offers substantial information that is missing from AR; however, we also pointed out several limitations of the metric. These limitations make PR, if used on its own, inadequate for a solid evaluation of a researcher’s contribution. In this work, we present our own approach for judging the quality of a Computer Science/Computer Engineering conference venue and, implicitly, the potential quality of a paper accepted at that conference. Driven by our previous findings on the adequacy of PR, as well as our belief that an institution does not necessarily “make” a researcher, we propose a Conference Classification Approach (CCA) that takes into account a number of metrics and factors in addition to PR, namely the paper’s impact and the authors’ h-indexes. We present and discuss our results, based on data gathered from close to 3000 papers from 12 top-tier Computer Science/Computer Engineering conferences belonging to different research fields. To evaluate CCA, we compare our conference rankings against multiple publicly available rankings based on evaluations from the Computer Science/Computer Engineering community, and we show that our approach achieves a very comparable classification.
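The abstract describes CCA as combining venue selectivity (PR) with paper impact and author h-indexes, but does not spell out how the ingredients are weighted or normalised. The Python sketch below only illustrates the general idea: the h_index function follows the standard definition of the h-index, while conference_score, its equal weights, and its saturating normalisations are illustrative assumptions, not the formula from the paper.

    # Minimal sketch of a PR/impact/h-index composite score.
    # The weights and normalisations below are assumptions for illustration;
    # they are NOT the CCA formula from Loizides & Koutsakis (2017).

    def h_index(citation_counts):
        """Standard h-index: the largest h such that h papers each have >= h citations."""
        counts = sorted(citation_counts, reverse=True)
        return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

    def conference_score(peer_reputation, paper_citations, author_h_indexes,
                         weights=(1/3, 1/3, 1/3)):
        """Hypothetical composite of the three ingredients the abstract names.

        peer_reputation  -- PR value for the venue, assumed pre-normalised to [0, 1]
        paper_citations  -- citation count of the paper (a proxy for its impact)
        author_h_indexes -- h-indexes of the paper's authors
        """
        w_pr, w_imp, w_h = weights
        impact = paper_citations / (paper_citations + 10)   # illustrative saturation
        h_avg = sum(author_h_indexes) / len(author_h_indexes)
        h_norm = h_avg / (h_avg + 20)                        # illustrative saturation
        return w_pr * peer_reputation + w_imp * impact + w_h * h_norm

    if __name__ == "__main__":
        print(h_index([10, 8, 5, 4, 3]))                     # -> 4
        print(round(conference_score(0.6, 120, [25, 12, 7]), 3))

The saturating normalisations simply keep each ingredient in [0, 1] so the weighted sum is comparable across venues; the paper itself may normalise and weight these factors differently.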

Suggested Citation

  • Loizides, Orestis-Stavros & Koutsakis, Polychronis, 2017. "On evaluating the quality of a computer science/computer engineering conference," Journal of Informetrics, Elsevier, vol. 11(2), pages 541-552.
  • Handle: RePEc:eee:infome:v:11:y:2017:i:2:p:541-552
    DOI: 10.1016/j.joi.2017.03.008

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S1751157716301808
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.joi.2017.03.008?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a version you can access through your library subscription
    ---><---

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Abramo, Giovanni & D’Angelo, Ciriaco Andrea, 2016. "A farewell to the MNCS and like size-independent indicators," Journal of Informetrics, Elsevier, vol. 10(2), pages 646-651.
    2. Peep Küngas & Siim Karus & Svitlana Vakulenko & Marlon Dumas & Cristhian Parra & Fabio Casati, 2013. "Reverse-engineering conference rankings: what does it take to make a reputable conference?," Scientometrics, Springer;Akadémiai Kiadó, vol. 96(2), pages 651-665, August.
    3. Anne-Wil Harzing, 2014. "A longitudinal study of Google Scholar coverage between 2012 and 2013," Scientometrics, Springer;Akadémiai Kiadó, vol. 98(1), pages 565-575, January.
    4. Lokman I. Meho & Kiduk Yang, 2007. "Impact of data sources on citation counts and rankings of LIS faculty: Web of Science versus Scopus and Google Scholar," Journal of the American Society for Information Science and Technology, Association for Information Science & Technology, vol. 58(13), pages 2105-2125, November.
    5. Vinkler, Péter, 2012. "The case of scientometricians with the “absolute relative” impact indicator," Journal of Informetrics, Elsevier, vol. 6(2), pages 254-264.
    6. James Hartley, 2012. "To cite or not to cite: author self-citations and the impact factor," Scientometrics, Springer;Akadémiai Kiadó, vol. 92(2), pages 313-317, August.
    7. Wolfgang Glänzel & András Schubert, 2003. "A new classification scheme of science fields and subfields designed for scientometric evaluation purposes," Scientometrics, Springer;Akadémiai Kiadó, vol. 56(3), pages 357-367, March.
    8. Waltman, Ludo, 2016. "A review of the literature on citation impact indicators," Journal of Informetrics, Elsevier, vol. 10(2), pages 365-391.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Danielle H. Lee, 2019. "Predictive power of conference-related factors on citation rates of conference papers," Scientometrics, Springer;Akadémiai Kiadó, vol. 118(1), pages 281-304, January.
    2. Xiancheng Li & Wenge Rong & Haoran Shi & Jie Tang & Zhang Xiong, 2018. "The impact of conference ranking systems in computer science: a comparative regression analysis," Scientometrics, Springer;Akadémiai Kiadó, vol. 116(2), pages 879-907, August.
    3. Meho, Lokman I., 2019. "Using Scopus’s CiteScore for assessing the quality of computer science conferences," Journal of Informetrics, Elsevier, vol. 13(1), pages 419-433.
    4. Wu, Dengsheng & Wang, Shuwen & Xu, Weixuan & Li, Jianping, 2024. "Do conference-journal articles receive more citations? A case study in physics," Journal of Informetrics, Elsevier, vol. 18(4).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Waltman, Ludo, 2016. "A review of the literature on citation impact indicators," Journal of Informetrics, Elsevier, vol. 10(2), pages 365-391.
    2. Loet Leydesdorff & Paul Wouters & Lutz Bornmann, 2016. "Professional and citizen bibliometrics: complementarities and ambivalences in the development and use of indicators—a state-of-the-art report," Scientometrics, Springer;Akadémiai Kiadó, vol. 109(3), pages 2129-2150, December.
    3. Muhammad Salman & Mohammad Masroor Ahmed & Muhammad Tanvir Afzal, 2021. "Assessment of author ranking indices based on multi-authorship," Scientometrics, Springer;Akadémiai Kiadó, vol. 126(5), pages 4153-4172, May.
    4. Hamid R. Jamali & Majid Nabavi, 2015. "Open access and sources of full-text articles in Google Scholar in different subject fields," Scientometrics, Springer;Akadémiai Kiadó, vol. 105(3), pages 1635-1651, December.
    5. Juan Miguel Campanario, 2018. "Are leaders really leading? Journals that are first in Web of Science subject categories in the context of their groups," Scientometrics, Springer;Akadémiai Kiadó, vol. 115(1), pages 111-130, April.
    6. Brito, Ricardo & Rodríguez-Navarro, Alonso, 2018. "Research assessment by percentile-based double rank analysis," Journal of Informetrics, Elsevier, vol. 12(1), pages 315-329.
    7. Michael Gusenbauer, 2019. "Google Scholar to overshadow them all? Comparing the sizes of 12 academic search engines and bibliographic databases," Scientometrics, Springer;Akadémiai Kiadó, vol. 118(1), pages 177-214, January.
    8. Parul Khurana & Kiran Sharma, 2022. "Impact of h-index on author’s rankings: an improvement to the h-index for lower-ranked authors," Scientometrics, Springer;Akadémiai Kiadó, vol. 127(8), pages 4483-4498, August.
    9. Muhammad Raheel & Samreen Ayaz & Muhammad Tanvir Afzal, 2018. "Evaluation of h-index, its variants and extensions based on publication age & citation intensity in civil engineering," Scientometrics, Springer;Akadémiai Kiadó, vol. 114(3), pages 1107-1127, March.
    10. Halevi, Gali & Moed, Henk & Bar-Ilan, Judit, 2017. "Suitability of Google Scholar as a source of scientific information and as a source of data for scientific evaluation—Review of the Literature," Journal of Informetrics, Elsevier, vol. 11(3), pages 823-834.
    11. Albarrán, Pedro & Herrero, Carmen & Ruiz-Castillo, Javier & Villar, Antonio, 2017. "The Herrero-Villar approach to citation impact," Journal of Informetrics, Elsevier, vol. 11(2), pages 625-640.
    12. Alireza Abbasi & Mahdi Jalili & Abolghasem Sadeghi-Niaraki, 2018. "Influence of network-based structural and power diversity on research performance," Scientometrics, Springer;Akadémiai Kiadó, vol. 117(1), pages 579-590, October.
    13. Martin-Martin, Alberto & Orduna-Malea, Enrique & Harzing, Anne-Wil & Delgado López-Cózar, Emilio, 2017. "Can we use Google Scholar to identify highly-cited documents?," Journal of Informetrics, Elsevier, vol. 11(1), pages 152-163.
    14. Lutz Bornmann & Klaus Wohlrabe, 2019. "Normalisation of citation impact in economics," Scientometrics, Springer;Akadémiai Kiadó, vol. 120(2), pages 841-884, August.
    15. Sergio Copiello, 2019. "The open access citation premium may depend on the openness and inclusiveness of the indexing database, but the relationship is controversial because it is ambiguous where the open access boundary lies," Scientometrics, Springer;Akadémiai Kiadó, vol. 121(2), pages 995-1018, November.
    16. Borenstein, Denis & Perlin, Marcelo S. & Imasato, Takeyoshi, 2022. "The Academic Inbreeding Controversy: Analysis and Evidence from Brazil," Journal of Informetrics, Elsevier, vol. 16(2).
    17. Dunaiski, Marcel & Geldenhuys, Jaco & Visser, Willem, 2019. "Globalised vs averaged: Bias and ranking performance on the author level," Journal of Informetrics, Elsevier, vol. 13(1), pages 299-313.
    18. Loet Leydesdorff, 2013. "An evaluation of impacts in “Nanoscience & nanotechnology”: steps towards standards for citation analysis," Scientometrics, Springer;Akadémiai Kiadó, vol. 94(1), pages 35-55, January.
    19. Enrique Orduna-Malea & Juan M. Ayllón & Alberto Martín-Martín & Emilio Delgado López-Cózar, 2015. "Methods for estimating the size of Google Scholar," Scientometrics, Springer;Akadémiai Kiadó, vol. 104(3), pages 931-949, September.
    20. John Mingers & Martin Meyer, 2017. "Normalizing Google Scholar data for use in research evaluation," Scientometrics, Springer;Akadémiai Kiadó, vol. 112(2), pages 1111-1121, August.

