
CPLLM: Clinical prediction with large language models

Author

Listed:
  • Ofir Ben Shoham
  • Nadav Rappoport

Abstract

We present Clinical Prediction with Large Language Models (CPLLM), a method that fine-tunes a pre-trained Large Language Model (LLM) to predict clinical disease and hospital readmission. We utilized quantization and fine-tuned the LLM using prompts. For diagnostic predictions, we predicted whether patients would be diagnosed with a target disease during their next visit or in the subsequent diagnosis, leveraging their historical medical records. We compared our results to various baselines, including RETAIN and Med-BERT, the latter of which is the current state-of-the-art model for disease prediction using temporal structured EHR data. In addition, we evaluated CPLLM's utility in predicting hospital readmission and compared our method's performance with benchmark baselines. Our experiments revealed that our proposed method, CPLLM, surpasses all the tested models in terms of PR-AUC and ROC-AUC metrics, providing state-of-the-art performance as a tool for predicting disease diagnosis and patient hospital readmission without requiring pre-training on medical data. Such a method can be easily implemented and integrated into the clinical workflow to help care providers plan next steps for their patients.

Author summary: We introduce Clinical Prediction with Large Language Models (CPLLM), a novel method that fine-tunes a pre-trained Large Language Model (LLM) to enhance predictions of clinical diseases and patient readmissions. By leveraging historical medical records, we aimed to predict whether patients will be diagnosed with a specific disease or be readmitted. We compared our method against the current state-of-the-art model for disease prediction using structured electronic health record (EHR) data. Our findings demonstrate that CPLLM significantly outperforms state-of-the-art models in both PR-AUC and ROC-AUC metrics. Additionally, our method does not require pre-training on clinical data, making it straightforward to implement with existing LLMs. By integrating CPLLM, healthcare providers can make informed decisions about patient care, ultimately leading to better outcomes. CPLLM can be readily adopted within clinical workflows, assisting care providers in planning appropriate next steps for their patients.
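
The listing does not include the authors' code. The Python sketch below only illustrates the general recipe the abstract describes: building a text prompt from a patient's diagnosis history and fine-tuning a quantized pre-trained LLM with a parameter-efficient adapter for a binary next-diagnosis (or readmission) label. The base model name, prompt template, toy records, and LoRA/training settings are illustrative assumptions rather than the published pipeline; it relies on the Hugging Face transformers, peft, datasets, and bitsandbytes libraries and a single GPU.

    # Minimal sketch, assuming a Hugging Face setup; not the authors' exact pipeline.
    import torch
    from datasets import Dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              BitsAndBytesConfig, Trainer, TrainingArguments)
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

    MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # assumed base LLM, purely illustrative

    # Toy records: each patient has a temporally ordered diagnosis history and a
    # binary label (target disease diagnosed at the next visit, or readmission).
    records = [
        {"history": ["Essential hypertension", "Type 2 diabetes mellitus"], "label": 1},
        {"history": ["Acute bronchitis"], "label": 0},
    ]

    def to_prompt(rec):
        # Hypothetical prompt template: list prior diagnosis descriptions in order.
        return {"text": "Patient diagnoses, in order: " + "; ".join(rec["history"]),
                "label": rec["label"]}

    dataset = Dataset.from_list([to_prompt(r) for r in records])

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    dataset = dataset.map(tokenize, batched=True)

    # 4-bit quantization so the base model fits on a single GPU.
    bnb_config = BitsAndBytesConfig(load_in_4bit=True,
                                    bnb_4bit_quant_type="nf4",
                                    bnb_4bit_compute_dtype=torch.bfloat16)
    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL_NAME, num_labels=2, quantization_config=bnb_config)
    model.config.pad_token_id = tokenizer.pad_token_id

    # Parameter-efficient fine-tuning (LoRA) on top of the frozen quantized weights;
    # r and lora_alpha are placeholder hyperparameters.
    model = prepare_model_for_kbit_training(model)
    model = get_peft_model(model, LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16))

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="cpllm-sketch",
                               per_device_train_batch_size=2,
                               num_train_epochs=1,
                               logging_steps=1),
        train_dataset=dataset,
        tokenizer=tokenizer,  # default collator pads batches dynamically
    )
    trainer.train()

In practice the fitted classifier would be evaluated on held-out patients with PR-AUC and ROC-AUC, the metrics reported in the abstract; the toy dataset above is only there to make the sketch self-contained.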

Suggested Citation

  • Ofir Ben Shoham & Nadav Rappoport, 2024. "CPLLM: Clinical prediction with large language models," PLOS Digital Health, Public Library of Science, vol. 3(12), pages 1-15, December.
  • Handle: RePEc:plo:pdig00:0000680
    DOI: 10.1371/journal.pdig.0000680

    Download full text from publisher

    File URL: https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000680
    Download Restriction: no

    File URL: https://journals.plos.org/digitalhealth/article/file?id=10.1371/journal.pdig.0000680&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pdig.0000680?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pdig00:0000680. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

We have no bibliographic references for this item. You can help add them by using this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: digitalhealth (email available below). General contact details of provider: https://journals.plos.org/digitalhealth.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.