
Benchmarking large language models for biomedical natural language processing applications and recommendations

Authors
  • Qingyu Chen (Yale University; National Institutes of Health)
  • Yan Hu (University of Texas Health Science Center at Houston)
  • Xueqing Peng (Yale University)
  • Qianqian Xie (Yale University)
  • Qiao Jin (National Institutes of Health)
  • Aidan Gilson (Yale University)
  • Maxwell B. Singer (Yale University)
  • Xuguang Ai (Yale University)
  • Po-Ting Lai (National Institutes of Health)
  • Zhizheng Wang (National Institutes of Health)
  • Vipina K. Keloth (Yale University)
  • Kalpana Raja (Yale University)
  • Jimin Huang (Yale University)
  • Huan He (Yale University)
  • Fongci Lin (Yale University)
  • Jingcheng Du (University of Texas Health Science Center at Houston)
  • Rui Zhang (University of Minnesota)
  • W. Jim Zheng (University of Texas Health Science Center at Houston)
  • Ron A. Adelman (Yale University)
  • Zhiyong Lu (National Institutes of Health)
  • Hua Xu (Yale University)

Abstract

The rapid growth of biomedical literature poses challenges for manual knowledge curation and synthesis. Biomedical Natural Language Processing (BioNLP) automates this process. While Large Language Models (LLMs) have shown promise in general domains, their effectiveness in BioNLP tasks remains unclear due to limited benchmarks and practical guidelines. We perform a systematic evaluation of four LLMs (GPT and LLaMA representatives) on 12 BioNLP benchmarks across six applications. We compare their zero-shot, few-shot, and fine-tuning performance with the traditional fine-tuning of BERT or BART models. We examine inconsistencies, missing information, and hallucinations, and perform a cost analysis. Here, we show that traditional fine-tuning outperforms zero- or few-shot LLMs on most tasks. However, closed-source LLMs like GPT-4 excel in reasoning-related tasks such as medical question answering. Open-source LLMs still require fine-tuning to close performance gaps. We find issues such as missing information and hallucinations in LLM outputs. These results offer practical insights for applying LLMs in BioNLP.
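
To make the evaluated settings concrete, the sketch below contrasts zero-shot prompting (task instructions only) with few-shot prompting (instructions plus worked in-context examples) on a biomedical named entity recognition task, one of the application types benchmarked. This is a minimal illustration, not the paper's benchmark code: the prompt wording, example sentences, and helper names are hypothetical, and it assumes the OpenAI Python SDK with an API key available in the environment.

    # Hypothetical sketch: zero-shot vs. few-shot prompting for disease-mention
    # extraction. Prompts and examples are illustrative, not from the paper.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    INSTRUCTIONS = ("Extract all disease mentions from the sentence. "
                    "Return them as a comma-separated list.")

    # Hypothetical in-context examples, used only in the few-shot setting.
    FEW_SHOT_EXAMPLES = [
        ("Patients with type 2 diabetes often develop neuropathy.",
         "type 2 diabetes, neuropathy"),
    ]

    def build_messages(sentence, few_shot):
        """Assemble the chat messages for either prompting setting."""
        messages = [{"role": "system", "content": INSTRUCTIONS}]
        if few_shot:
            for text, answer in FEW_SHOT_EXAMPLES:
                messages.append({"role": "user", "content": text})
                messages.append({"role": "assistant", "content": answer})
        messages.append({"role": "user", "content": sentence})
        return messages

    def extract_diseases(sentence, few_shot=False):
        response = client.chat.completions.create(
            model="gpt-4",   # one of the closed-source models evaluated
            temperature=0,   # deterministic output, as typical in benchmarking
            messages=build_messages(sentence, few_shot),
        )
        return response.choices[0].message.content

    sentence = "Tamoxifen treats estrogen-receptor-positive breast cancer."
    print("zero-shot:", extract_diseases(sentence))
    print("few-shot: ", extract_diseases(sentence, few_shot=True))

In the fine-tuning settings the paper compares against, the same labeled examples would instead be used to update model weights (of an open-source LLM such as LLaMA, or of a BERT/BART model) rather than being placed in the prompt.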

Suggested Citation

  • Qingyu Chen & Yan Hu & Xueqing Peng & Qianqian Xie & Qiao Jin & Aidan Gilson & Maxwell B. Singer & Xuguang Ai & Po-Ting Lai & Zhizheng Wang & Vipina K. Keloth & Kalpana Raja & Jimin Huang & Huan He & Fongci Lin & Jingcheng Du & Rui Zhang & W. Jim Zheng & Ron A. Adelman & Zhiyong Lu & Hua Xu, 2025. "Benchmarking large language models for biomedical natural language processing applications and recommendations," Nature Communications, Nature, vol. 16(1), pages 1-16, December.
  • Handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-56989-2
    DOI: 10.1038/s41467-025-56989-2

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41467-025-56989-2
    File Function: Abstract
    Download Restriction: no

    File URL: https://libkey.io/10.1038/s41467-025-56989-2?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Maxime Griot & Coralie Hemptinne & Jean Vanderdonckt & Demet Yuksel, 2025. "Large Language Models lack essential metacognition for reliable medical reasoning," Nature Communications, Nature, vol. 16(1), pages 1-10, December.
    2. Ali Nemati & Mohammad Assadi Shalmani & Qiang Lu & Jake Luo, 2025. "Benchmarking Large Language Models from Open and Closed Source Models to Apply Data Annotation for Free-Text Criteria in Healthcare," Future Internet, MDPI, vol. 17(4), pages 1-27, March.
    3. Cheng-Yi Li & Kao-Jung Chang & Cheng-Fu Yang & Hsin-Yu Wu & Wenting Chen & Hritik Bansal & Ling Chen & Yi-Ping Yang & Yu-Chun Chen & Shih-Pin Chen & Shih-Jen Chen & Jiing-Feng Lirng & Kai-Wei Chang & , 2025. "Towards a holistic framework for multimodal LLM in 3D brain CT radiology report generation," Nature Communications, Nature, vol. 16(1), pages 1-14, December.
    4. Tingmingke Lu, 2025. "Maximum Hallucination Standards for Domain-Specific Large Language Models," Papers 2503.05481, arXiv.org.
    5. Zheng, Shuwen & Pan, Kai & Liu, Jie & Chen, Yunxia, 2024. "Empirical study on fine-tuning pre-trained large language models for fault diagnosis of complex systems," Reliability Engineering and System Safety, Elsevier, vol. 252(C).
    6. Zhou, Zhen & Gu, Ziyuan & Qu, Xiaobo & Liu, Pan & Liu, Zhiyuan & Yu, Wenwu, 2024. "Urban mobility foundation model: A literature review and hierarchical perspective," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 192(C).
    7. Zhenjia Chen & Zhenyuan Lin & Ji Yang & Cong Chen & Di Liu & Liuting Shan & Yuanyuan Hu & Tailiang Guo & Huipeng Chen, 2024. "Cross-layer transmission realized by light-emitting memristor for constructing ultra-deep neural network with transfer learning ability," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    8. Yujin Oh & Sangjoon Park & Hwa Kyung Byun & Yeona Cho & Ik Jae Lee & Jin Sung Kim & Jong Chul Ye, 2024. "LLM-driven multimodal target volume contouring in radiation oncology," Nature Communications, Nature, vol. 15(1), pages 1-14, December.
    9. Chen Gao & Xiaochong Lan & Nian Li & Yuan Yuan & Jingtao Ding & Zhilun Zhou & Fengli Xu & Yong Li, 2024. "Large language models empowered agent-based modeling and simulation: a survey and perspectives," Palgrave Communications, Palgrave Macmillan, vol. 11(1), pages 1-24, December.
    10. Juexiao Zhou & Xiaonan He & Liyuan Sun & Jiannan Xu & Xiuying Chen & Yuetan Chu & Longxi Zhou & Xingyu Liao & Bin Zhang & Shawn Afvari & Xin Gao, 2024. "Pre-trained multimodal large language model enhances dermatological diagnosis using SkinGPT-4," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    11. Qin, Hongyi & Zhu, Yifan & Jiang, Yan & Luo, Siqi & Huang, Cui, 2024. "Examining the impact of personalization and carefulness in AI-generated health advice: Trust, adoption, and insights in online healthcare consultations experiments," Technology in Society, Elsevier, vol. 79(C).
    12. Ching-Nam Hang & Pei-Duo Yu & Roberto Morabito & Chee-Wei Tan, 2024. "Large Language Models Meet Next-Generation Networking Technologies: A Review," Future Internet, MDPI, vol. 16(10), pages 1-29, October.
    13. Venkat Ram Reddy Ganuthula & Krishna Kumar Balaraman, 2025. "The Paradox of Professional Input: How Expert Collaboration with AI Systems Shapes Their Future Value," Papers 2504.12654, arXiv.org.
    14. Kevin Wu & Eric Wu & Kevin Wei & Angela Zhang & Allison Casasola & Teresa Nguyen & Sith Riantawan & Patricia Shi & Daniel Ho & James Zou, 2025. "An automated framework for assessing how well LLMs cite relevant medical references," Nature Communications, Nature, vol. 16(1), pages 1-10, December.
    15. van Kolfschooten, Hannah & van Oirschot, Janneke, 2024. "The EU Artificial Intelligence Act (2024): Implications for healthcare," Health Policy, Elsevier, vol. 149(C).
    16. Soroosh Tayebi Arasteh & Tianyu Han & Mahshad Lotfinia & Christiane Kuhl & Jakob Nikolas Kather & Daniel Truhn & Sven Nebelung, 2024. "Large language models streamline automated machine learning for clinical studies," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    17. Hossam A. Gabber & Omar S. Hemied, 2024. "Domain-Specific Large Language Model for Renewable Energy and Hydrogen Deployment Strategies," Energies, MDPI, vol. 17(23), pages 1-25, December.
