
The Relationship of Previous Training and Experience of Journal Peer Reviewers to Subsequent Review Quality

Author

Listed:
  • Michael L Callaham
  • John Tercier

Abstract

Background: Peer review is considered crucial to the selection and publication of quality science, but very little is known about the previous experiences and training that might identify high-quality peer reviewers. The reviewer selection processes of most journals, and thus the qualifications of their reviewers, are ill defined. More objective selection of peer reviewers might improve the journal peer review process and thus the quality of published science.

Methods and Findings: 306 experienced reviewers (71% of all those associated with a specialty journal) completed a survey of past training and experiences postulated to improve peer review skills. Reviewers performed 2,856 reviews of 1,484 separate manuscripts during a four-year study period, all prospectively rated on a standardized quality scale by editors. Multivariable analysis revealed that most variables, including academic rank, formal training in critical appraisal or statistics, and status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital (versus another teaching environment) and relative youth (under ten years of experience after finishing training). Being on an editorial board and doing formal grant (study section) review were each predictors for only one of our two comparisons. However, the predictive power of all variables was weak.

Conclusions: Our study confirms that there are no easily identifiable types of formal training or experience that predict reviewer performance. Skill in scientific peer review may be as ill defined and hard to impart as is "common sense." Without a better understanding of those skills, it seems unlikely that journals and editors will succeed in systematically improving their selection of reviewers. This inability to predict performance makes it imperative that all but the smallest journals implement review rating systems to routinely monitor the quality of their reviews (and thus the quality of the science they publish).

A survey of experienced reviewers, asked about the training they had received in peer review, found no easily identifiable types of formal training or experience that predict reviewer performance.

Background: When medical researchers have completed their research and written it up, the next step is to get it published as an article in a journal, so that the findings can be circulated widely. These published findings help determine subsequent research and clinical practice. The editors of reputable journals, including PLoS Medicine, have to decide whether the articles sent to them are accurate, of good quality, and of interest to the readers of their journal. To do this they need specialist advice, so they contact experts in the topic of the research article and ask them to write reports. This is the process of scientific peer review, and the experts who write such reports are known as "peer reviewers." Although the editors make the final decision, the advice and criticism of these peer reviewers are essential in making decisions on publication, and usually in requiring authors to make changes to their manuscript. The contribution that peer reviewers have made to an article by the time it is finally published may therefore be considerable.

Why Was This Study Done?: It is hard for journal editors to know who will make a good peer reviewer, and there is no proven system for choosing them. The authors of this study wanted to identify the previous experiences and training that make up the background of good peer reviewers and compare these backgrounds with the quality of the reviews they provided. This would help journal editors select good reviewers in the future and, as a result, improve the quality of the science they publish for readers, including other researchers.

What Did the Researchers Do and Find?: The authors contacted all the regular reviewers of one specialty journal (Annals of Emergency Medicine). A total of 306 of these experienced reviewers (71% of all those associated with the journal) completed a survey of past training and experiences that might be expected to improve peer review skills. These reviewers had performed 2,856 reviews of 1,484 separate manuscripts during a four-year study period, and during this time the quality of the reviews had been rated by the journal's editors. Surprisingly, most variables, including academic rank, formal training in critical appraisal or statistics, and status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital (versus another teaching environment) and relative youth (under ten years of experience after finishing training), and even these were only weak predictors.

What Do These Findings Mean?: This study suggests that there are no easily identifiable types of formal training or experience that predict peer reviewer performance, although it is clear that some reviewers (and reviews) are better than others. The authors therefore suggest that it is essential for journals to routinely monitor the quality of the reviews submitted to them, to ensure they are getting good advice (a practice that is not universal).
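The multivariable analysis described above relates editor-assigned review quality ratings to reviewer characteristics, with each reviewer contributing many reviews. The sketch below is a minimal illustration of that kind of model in Python using statsmodels, not the authors' actual code or data: the variable names, the 1-to-5 rating scale, the effect sizes, and the synthetic data are all assumptions made for demonstration.

    # Illustrative sketch only (not the study's analysis): a mixed-effects
    # regression relating per-review quality ratings to reviewer-level
    # predictors, with a random intercept per reviewer because each
    # reviewer contributes multiple reviews. All data are synthetic.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_reviewers, reviews_each = 306, 9  # roughly mirrors 306 reviewers, 2,856 reviews

    reviewers = pd.DataFrame({
        "reviewer_id": np.arange(n_reviewers),
        "university_hospital": rng.integers(0, 2, n_reviewers),  # 1 = university-operated hospital
        "years_post_training": rng.integers(1, 30, n_reviewers),
        "formal_stats_training": rng.integers(0, 2, n_reviewers),
        "editorial_board": rng.integers(0, 2, n_reviewers),
    })

    # One row per review; "quality" is a hypothetical 1-5 editor rating.
    # Only the first two predictors get a (small) true effect here, echoing
    # the study's finding that most variables did not predict quality.
    reviews = reviewers.loc[reviewers.index.repeat(reviews_each)].reset_index(drop=True)
    reviews["quality"] = (3.0
                          + 0.15 * reviews["university_hospital"]
                          - 0.01 * reviews["years_post_training"]
                          + rng.normal(0, 0.6, len(reviews))).clip(1, 5)

    # Random intercept per reviewer accounts for the repeated measures.
    model = smf.mixedlm(
        "quality ~ university_hospital + years_post_training"
        " + formal_stats_training + editorial_board",
        data=reviews,
        groups=reviews["reviewer_id"],
    )
    print(model.fit().summary())

The point of the sketch is only the model's shape: review-level outcomes, reviewer-level predictors, and a grouping term for repeated reviews per reviewer. The coefficients printed for the synthetic data say nothing about the study's real results.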

Suggested Citation

  • Michael L Callaham & John Tercier, 2007. "The Relationship of Previous Training and Experience of Journal Peer Reviewers to Subsequent Review Quality," PLOS Medicine, Public Library of Science, vol. 4(1), pages 1-9, January.
  • Handle: RePEc:plo:pmed00:0040040
    DOI: 10.1371/journal.pmed.0040040

    Download full text from publisher

    File URL: https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0040040
    Download Restriction: no

    File URL: https://journals.plos.org/plosmedicine/article/file?id=10.1371/journal.pmed.0040040&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pmed.0040040?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Thomas Feliciani & Michael Morreau & Junwen Luo & Pablo Lucas & Kalpana Shankar, 2022. "Designing grant-review panels for better funding decisions: Lessons from an empirically calibrated simulation model," Research Policy, Elsevier, vol. 51(4).
    2. Hendy Abdoul & Christophe Perrey & Philippe Amiel & Florence Tubach & Serge Gottot & Isabelle Durand-Zaleski & Corinne Alberti, 2012. "Peer Review of Grant Applications: Criteria Used and Qualitative Study of Reviewer Practices," PLOS ONE, Public Library of Science, vol. 7(9), pages 1-15, September.
    3. José Luis Ortega, 2017. "Are peer-review activities related to reviewer bibliometric performance? A scientometric analysis of Publons," Scientometrics, Springer;Akadémiai Kiadó, vol. 112(2), pages 947-962, August.
    4. Dimity Stephen, 2022. "Peer reviewers equally critique theory, method, and writing, with limited effect on the final content of accepted manuscripts," Scientometrics, Springer;Akadémiai Kiadó, vol. 127(6), pages 3413-3435, June.
    5. Hendy Abdoul & Christophe Perrey & Florence Tubach & Philippe Amiel & Isabelle Durand-Zaleski & Corinne Alberti, 2012. "Non-Financial Conflicts of Interest in Academic Grant Evaluation: A Qualitative Study of Multiple Stakeholders in France," PLOS ONE, Public Library of Science, vol. 7(4), pages 1-10, April.
    6. Jens Jirschitzka & Aileen Oeberst & Richard Göllner & Ulrike Cress, 2017. "Inter-rater reliability and validity of peer reviews in an interdisciplinary field," Scientometrics, Springer;Akadémiai Kiadó, vol. 113(2), pages 1059-1092, November.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pmed00:0040040. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: plosmedicine (email available below). General contact details of provider: https://journals.plos.org/plosmedicine/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.