
Peer Review Evaluation Process of Marie Curie Actions under EU’s Seventh Framework Programme for Research

Author

Listed:
  • David G Pina
  • Darko Hren
  • Ana Marušić

Abstract

We analysed the peer review of grant proposals under Marie Curie Actions, a major EU research funding instrument, which involves two steps: an independent assessment (Individual Evaluation Report, IER) performed remotely by three raters, and a consensus opinion reached by the same raters during a meeting (Consensus Report, CR). For 24,897 proposals evaluated from 2007 to 2013, the association between average IER and CR scores was very high across different panels, grant calls and years. The median average deviation (AD) index, used as a measure of inter-rater agreement, was 5.4 points overall on a 0-100 scale (interquartile range 3.4-8.3), demonstrating good general agreement among raters. For proposals where one rater disagreed with the other two (n=1424; 5.7%), or where all three raters disagreed (n=2075; 8.3%), the average IER and CR scores were still highly associated. Disagreement was more frequent for proposals from the Economics/Social Sciences and Humanities panels, and greater disagreement was observed for proposals with lower average IER scores. CR scores for proposals with initial disagreement were also significantly lower. Proposals with a large absolute difference between the average IER and CR scores (≥10 points; n=368, 1.5%) generally had lower CR scores. An inter-correlation matrix of individual raters' scores on the proposals' evaluation criteria indicated that these scores were, in general, a reflection of the raters' overall scores. Our analysis demonstrated good internal consistency and generally high agreement among raters. Consensus meetings appear to be most relevant for particular panels and for subsets of proposals with large differences among raters' scores.
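
To make the agreement measures described in the abstract concrete, below is a minimal Python sketch. It assumes the AD index is the mean absolute deviation of the three raters' scores from their mean (one common definition; the study's exact formula and its disagreement thresholds are not reproduced here), so the 10-point cutoff and the example scores are purely illustrative.

```python
from itertools import combinations
from statistics import mean

def ad_index(scores):
    """Average deviation (AD) index of one proposal's rater scores:
    the mean absolute deviation from the raters' mean score
    (a common definition; the paper's exact variant may differ)."""
    m = mean(scores)
    return mean(abs(s - m) for s in scores)

def disagreement_pattern(scores, threshold=10.0):
    """Rough classification of disagreement among three raters.
    The 10-point cutoff on the 0-100 scale is illustrative only,
    not the threshold used in the study."""
    gaps = [abs(a - b) for a, b in combinations(scores, 2)]
    n_far = sum(g > threshold for g in gaps)
    if n_far == 0:
        return "general agreement"
    if n_far < 3:
        return "one rater disagrees with the other two"
    return "all three raters disagree"

# Hypothetical IER scores for a single proposal on the 0-100 scale.
ier_scores = [72.0, 78.0, 95.0]
print(round(ad_index(ier_scores), 1))    # 8.9 -- AD index for these scores
print(disagreement_pattern(ier_scores))  # one rater disagrees with the other two
print(round(mean(ier_scores), 1))        # 81.7 -- average IER, to compare with the CR score
```

In this sketch, a low AD index (such as the study's overall median of 5.4 points) corresponds to raters' scores clustering tightly around their mean, while the pattern labels mirror the abstract's split between proposals where one rater diverged and those where all three diverged.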

Suggested Citation

  • David G Pina & Darko Hren & Ana Marušić, 2015. "Peer Review Evaluation Process of Marie Curie Actions under EU’s Seventh Framework Programme for Research," PLOS ONE, Public Library of Science, vol. 10(6), pages 1-15, June.
  • Handle: RePEc:plo:pone00:0130753
    DOI: 10.1371/journal.pone.0130753

    Download full text from publisher

    File URL: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130753
    Download Restriction: no

    File URL: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0130753&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pone.0130753?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Hendy Abdoul & Christophe Perrey & Philippe Amiel & Florence Tubach & Serge Gottot & Isabelle Durand-Zaleski & Corinne Alberti, 2012. "Peer Review of Grant Applications: Criteria Used and Qualitative Study of Reviewer Practices," PLOS ONE, Public Library of Science, vol. 7(9), pages 1-15, September.
    2. Michael Obrecht & Karl Tibelius & Guy D'Aloisio, 2007. "Examining the value added by committee discussion in the review of applications for research awards," Research Evaluation, Oxford University Press, vol. 16(2), pages 79-91, June.
    3. Michael R Martin & Andrea Kopstein & Joy M Janice, 2010. "An Analysis of Preliminary and Post-Discussion Priority Scores for Grant Applications Peer Reviewed by the Center for Scientific Review at the NIH," PLOS ONE, Public Library of Science, vol. 5(11), pages 1-6, November.
    4. Stephen A Gallo & Afton S Carpenter & Scott R Glisson, 2013. "Teleconference versus Face-to-Face Scientific Peer Review of Grant Application: Effects on Review Outcomes," PLOS ONE, Public Library of Science, vol. 8(8), pages 1-9, August.
    5. Upali W. Jayasinghe & Herbert W. Marsh & Nigel Bond, 2003. "A multilevel cross‐classified modelling approach to peer review of grant proposals: the effects of assessor and researcher attributes on assessor ratings," Journal of the Royal Statistical Society Series A, Royal Statistical Society, vol. 166(3), pages 279-300, October.
    6. Rüdiger Mutz & Lutz Bornmann & Hans-Dieter Daniel, 2012. "Heterogeneity of Inter-Rater Reliabilities of Grant Peer Reviews and Its Determinants: A General Estimating Equations Approach," PLOS ONE, Public Library of Science, vol. 7(10), pages 1-10, October.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.

    Cited by:

    1. Hren, Darko & Pina, David G. & Norman, Christopher R. & Marušić, Ana, 2022. "What makes or breaks competitive research proposals? A mixed-methods analysis of research grant evaluation reports," Journal of Informetrics, Elsevier, vol. 16(2).
    2. Bayindir, Esra Eren & Gurdal, Mehmet Yigit & Saglam, Ismail, 2019. "A Game Theoretic Approach to Peer Review of Grant Proposals," Journal of Informetrics, Elsevier, vol. 13(4).
    3. Marco Seeber & Jef Vlegels & Mattia Cattaneo, 2022. "Conditions that do or do not disadvantage interdisciplinary research proposals in project evaluation," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 73(8), pages 1106-1126, August.
    4. A. I. M. Jakaria Rahman & Raf Guns & Loet Leydesdorff & Tim C. E. Engels, 2016. "Measuring the match between evaluators and evaluees: cognitive distances between panel members and research groups at the journal level," Scientometrics, Springer;Akadémiai Kiadó, vol. 109(3), pages 1639-1663, December.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Stephen A Gallo & Afton S Carpenter & Scott R Glisson, 2013. "Teleconference versus Face-to-Face Scientific Peer Review of Grant Application: Effects on Review Outcomes," PLOS ONE, Public Library of Science, vol. 8(8), pages 1-9, August.
    2. Elena A. Erosheva & Patrícia Martinková & Carole J. Lee, 2021. "When zero may not be zero: A cautionary note on the use of inter‐rater reliability in evaluating grant peer review," Journal of the Royal Statistical Society Series A, Royal Statistical Society, vol. 184(3), pages 904-919, July.
    3. Miriam L E Steiner Davis & Tiffani R Conner & Kate Miller-Bains & Leslie Shapard, 2020. "What makes an effective grants peer reviewer? An exploratory study of the necessary skills," PLOS ONE, Public Library of Science, vol. 15(5), pages 1-22, May.
    4. Patrícia Martinková & Dan Goldhaber & Elena Erosheva, 2018. "Disparities in ratings of internal and external applicants: A case for model-based inter-rater reliability," PLOS ONE, Public Library of Science, vol. 13(10), pages 1-17, October.
    5. Feliciani, Thomas & Morreau, Michael & Luo, Junwen & Lucas, Pablo & Shankar, Kalpana, 2022. "Designing grant-review panels for better funding decisions: Lessons from an empirically calibrated simulation model," Research Policy, Elsevier, vol. 51(4).
    6. Jens Jirschitzka & Aileen Oeberst & Richard Göllner & Ulrike Cress, 2017. "Inter-rater reliability and validity of peer reviews in an interdisciplinary field," Scientometrics, Springer;Akadémiai Kiadó, vol. 113(2), pages 1059-1092, November.
    7. Richard R Snell, 2015. "Menage a Quoi? Optimal Number of Peer Reviewers," PLOS ONE, Public Library of Science, vol. 10(4), pages 1-14, April.
    8. David Gurwitz & Elena Milanesi & Thomas Koenig, 2014. "Grant Application Review: The Case of Transparency," PLOS Biology, Public Library of Science, vol. 12(12), pages 1-6, December.
    9. Bayindir, Esra Eren & Gurdal, Mehmet Yigit & Saglam, Ismail, 2019. "A Game Theoretic Approach to Peer Review of Grant Proposals," Journal of Informetrics, Elsevier, vol. 13(4).
    10. Albert Banal-Estañol & Qianshuo Liu & Inés Macho-Stadler & David Pérez-Castrillo, 2021. "Similar-to-me Effects in the Grant Application Process: Applicants, Panelists, and the Likelihood of Obtaining Funds," Working Papers 1289, Barcelona School of Economics.
    11. Wen Luo & Oi-Man Kwok, 2010. "Proportional Reduction of Prediction Error in Cross-Classified Random Effects Models," Sociological Methods & Research, vol. 39(2), pages 188-205, November.
    12. Seeber, Marco & Alon, Ilan & Pina, David G. & Piro, Fredrik Niclas & Seeber, Michele, 2022. "Predictors of applying for and winning an ERC Proof-of-Concept grant: An automated machine learning model," Technological Forecasting and Social Change, Elsevier, vol. 184(C).
    13. Lutz Bornmann, 2015. "Interrater reliability and convergent validity of F1000Prime peer review," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 66(12), pages 2415-2426, December.
    14. Manuel Bagues & Mauro Sylos-Labini & Natalia Zinovyeva, 2017. "Does the Gender Composition of Scientific Committees Matter?," American Economic Review, American Economic Association, vol. 107(4), pages 1207-1238, April.
    15. Yuetong Chen & Hao Wang & Baolong Zhang & Wei Zhang, 2022. "A method of measuring the article discriminative capacity and its distribution," Scientometrics, Springer;Akadémiai Kiadó, vol. 127(6), pages 3317-3341, June.
    16. Xiaoyu Liu & Xuefeng Wang & Donghua Zhu, 2022. "Reviewer recommendation method for scientific research proposals: a case for NSFC," Scientometrics, Springer;Akadémiai Kiadó, vol. 127(6), pages 3343-3366, June.
    17. Stephen A Gallo & Joanne H Sullivan & Scott R Glisson, 2016. "The Influence of Peer Reviewer Expertise on the Evaluation of Research Funding Applications," PLOS ONE, Public Library of Science, vol. 11(10), pages 1-18, October.
    18. Mario Paolucci & Francisco Grimaldo, 2014. "Mechanism change in a simulation of peer review: from junk support to elitism," Scientometrics, Springer;Akadémiai Kiadó, vol. 99(3), pages 663-688, June.
    19. Marsh, Herbert W. & Jayasinghe, Upali W. & Bond, Nigel W., 2011. "Gender differences in peer reviews of grant applications: A substantive-methodological synergy in support of the null hypothesis model," Journal of Informetrics, Elsevier, vol. 5(1), pages 167-180.
    20. Bornmann, Lutz & Mutz, Rüdiger & Hug, Sven E. & Daniel, Hans-Dieter, 2011. "A multilevel meta-analysis of studies reporting correlations between the h index and 37 different h index variants," Journal of Informetrics, Elsevier, vol. 5(3), pages 346-359.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pone00:0130753. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: plosone (email available below). General contact details of provider: https://journals.plos.org/plosone/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.