Printed from https://ideas.repec.org/a/plo/pdig00/0000417.html

Peer review of GPT-4 technical report and systems card

Author

Listed:
  • Jack Gallifant
  • Amelia Fiske
  • Yulia A Levites Strekalova
  • Juan S Osorio-Valencia
  • Rachael Parke
  • Rogers Mwavu
  • Nicole Martinez
  • Judy Wawira Gichoya
  • Marzyeh Ghassemi
  • Dina Demner-Fushman
  • Liam G McCoy
  • Leo Anthony Celi
  • Robin Pierce

Abstract

This study provides a comprehensive review of OpenAI’s Generative Pre-trained Transformer 4 (GPT-4) technical report, with an emphasis on applications in high-risk settings such as healthcare. A diverse team of experts in artificial intelligence (AI), natural language processing, public health, law, policy, social science, healthcare research, and bioethics analyzed the report against established peer review guidelines. The report reflects a significant commitment of time and economic investment to transparent AI research, particularly in the creation of a comprehensive systems card for risk assessment and mitigation. However, the review identifies important limitations: the lack of clarity about training processes and restricted access to training data raise concerns about encoded biases and interests in GPT-4; the report omits the confidence and uncertainty estimations that are crucial in high-risk areas such as healthcare; and it fails to address potential privacy and intellectual property issues. The study also emphasizes the need for diverse, global involvement in developing and evaluating large language models (LLMs) to ensure broad societal benefits and to mitigate risks. It presents recommendations that include improving data transparency, developing accountability frameworks, establishing confidence standards for LLM outputs in high-risk settings, and enhancing industry research review processes. The paper concludes that while GPT-4’s report is a step towards open discussion of LLMs, more extensive interdisciplinary review is essential to address concerns about bias, harm, and risk, especially in high-risk domains.
The review aims to broaden understanding of LLMs and highlights the need for new forms of reflection on how LLMs are reviewed, the data required for effective evaluation, and how to address critical issues such as bias and risk.

Suggested Citation

  • Jack Gallifant & Amelia Fiske & Yulia A Levites Strekalova & Juan S Osorio-Valencia & Rachael Parke & Rogers Mwavu & Nicole Martinez & Judy Wawira Gichoya & Marzyeh Ghassemi & Dina Demner-Fushman & Liam G McCoy & Leo Anthony Celi & Robin Pierce, 2024. "Peer review of GPT-4 technical report and systems card," PLOS Digital Health, Public Library of Science, vol. 3(1), pages 1-15, January.
  • Handle: RePEc:plo:pdig00:0000417
    DOI: 10.1371/journal.pdig.0000417

    Download full text from publisher

    File URL: https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000417
    Download Restriction: no

    File URL: https://journals.plos.org/digitalhealth/article/file?id=10.1371/journal.pdig.0000417&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pdig.0000417?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a version you can access through your library subscription


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Palistha Shrestha & Jeevan Kandel & Hilal Tayara & Kil To Chong, 2024. "Post-translational modification prediction via prompt-based fine-tuning of a GPT-2 model," Nature Communications, Nature, vol. 15(1), pages 1-13, December.
    2. Timothy Atkinson & Thomas D. Barrett & Scott Cameron & Bora Guloglu & Matthew Greenig & Charlie B. Tan & Louis Robinson & Alex Graves & Liviu Copoiu & Alexandre Laterre, 2025. "Protein sequence modelling with Bayesian flow networks," Nature Communications, Nature, vol. 16(1), pages 1-14, December.
    3. Veda Sheersh Boorla & Costas D. Maranas, 2025. "CatPred: a comprehensive framework for deep learning in vitro enzyme kinetic parameters," Nature Communications, Nature, vol. 16(1), pages 1-17, December.
    4. Yang, Ying & Zhang, Wei & Lin, Hongyi & Liu, Yang & Qu, Xiaobo, 2024. "Applying masked language model for transport mode choice behavior prediction," Transportation Research Part A: Policy and Practice, Elsevier, vol. 184(C).
    5. Wenwu Zeng & Yutao Dou & Liangrui Pan & Liwen Xu & Shaoliang Peng, 2024. "Improving prediction performance of general protein language model by domain-adaptive pretraining on DNA-binding protein," Nature Communications, Nature, vol. 15(1), pages 1-18, December.
    6. Sijie Chen & Tong Lin & Ruchira Basu & Jeremy Ritchey & Shen Wang & Yichuan Luo & Xingcan Li & Dehua Pei & Levent Burak Kara & Xiaolin Cheng, 2024. "Design of target specific peptide inhibitors using generative deep learning and molecular dynamics simulations," Nature Communications, Nature, vol. 15(1), pages 1-20, December.
    7. Sophia Vincoff & Shrey Goel & Kseniia Kholina & Rishab Pulugurta & Pranay Vure & Pranam Chatterjee, 2025. "FusOn-pLM: a fusion oncoprotein-specific language model via adjusted rate masking," Nature Communications, Nature, vol. 16(1), pages 1-11, December.
    8. Kevin E. Wu & Kevin K. Yang & Rianne Berg & Sarah Alamdari & James Y. Zou & Alex X. Lu & Ava P. Amini, 2024. "Protein structure generation via folding diffusion," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    9. Gustavo Arango-Argoty & Elly Kipkogei & Ross Stewart & Gerald J. Sun & Arijit Patra & Ioannis Kagiampakis & Etai Jacob, 2025. "Pretrained transformers applied to clinical studies improve predictions of treatment efficacy and associated biomarkers," Nature Communications, Nature, vol. 16(1), pages 1-18, December.
    10. Adibvafa Fallahpour & Vincent Gureghian & Guillaume J. Filion & Ariel B. Lindner & Amir Pandi, 2025. "CodonTransformer: a multispecies codon optimizer using context-aware neural networks," Nature Communications, Nature, vol. 16(1), pages 1-12, December.
    11. David Ding & Ada Y. Shaw & Sam Sinai & Nathan Rollins & Noam Prywes & David F. Savage & Michael T. Laub & Debora S. Marks, 2024. "Protein design using structure-based residue preferences," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    12. Amir Pandi & David Adam & Amir Zare & Van Tuan Trinh & Stefan L. Schaefer & Marie Burt & Björn Klabunde & Elizaveta Bobkova & Manish Kushwaha & Yeganeh Foroughijabbari & Peter Braun & Christoph Spahn , 2023. "Cell-free biosynthesis combined with deep learning accelerates de novo-development of antimicrobial peptides," Nature Communications, Nature, vol. 14(1), pages 1-14, December.

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pdig00:0000417. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: digitalhealth (email available below). General contact details of provider: https://journals.plos.org/digitalhealth .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.