Printed from https://ideas.repec.org/a/gam/jijerp/v20y2023i4p3378-d1068780.html

Diagnostic Accuracy of Differential-Diagnosis Lists Generated by Generative Pretrained Transformer 3 Chatbot for Clinical Vignettes with Common Chief Complaints: A Pilot Study

Author

Listed:
  • Takanobu Hirosawa

    (Department of Diagnostic and Generalist Medicine, Dokkyo Medical University, Tochigi 321-0293, Japan)

  • Yukinori Harada

    (Department of Diagnostic and Generalist Medicine, Dokkyo Medical University, Tochigi 321-0293, Japan)

  • Masashi Yokose

    (Department of Diagnostic and Generalist Medicine, Dokkyo Medical University, Tochigi 321-0293, Japan)

  • Tetsu Sakamoto

    (Department of Diagnostic and Generalist Medicine, Dokkyo Medical University, Tochigi 321-0293, Japan)

  • Ren Kawamura

    (Department of Diagnostic and Generalist Medicine, Dokkyo Medical University, Tochigi 321-0293, Japan)

  • Taro Shimizu

    (Department of Diagnostic and Generalist Medicine, Dokkyo Medical University, Tochigi 321-0293, Japan)

Abstract

The diagnostic accuracy of differential diagnoses generated by artificial intelligence (AI) chatbots, including the generative pretrained transformer 3 (GPT-3) chatbot (ChatGPT-3), is unknown. This study evaluated the accuracy of differential-diagnosis lists generated by ChatGPT-3 for clinical vignettes with common chief complaints. General internal medicine physicians created clinical cases, correct diagnoses, and five differential diagnoses for ten common chief complaints. The rate of correct diagnosis by ChatGPT-3 within the ten differential-diagnosis lists was 28/30 (93.3%). The rate of correct diagnosis by physicians was still superior to that by ChatGPT-3 within the five differential-diagnosis lists (98.3% vs. 83.3%, p = 0.03). The rate of correct diagnosis by physicians was also superior to that by ChatGPT-3 for the top diagnosis (93.3% vs. 53.3%, p < 0.001). The rate of consistent differential diagnoses among physicians within the ten differential-diagnosis lists generated by ChatGPT-3 was 62/88 (70.5%). In summary, this study demonstrates the high diagnostic accuracy of differential-diagnosis lists generated by ChatGPT-3 for clinical cases with common chief complaints. This suggests that AI chatbots such as ChatGPT-3 can generate well-differentiated diagnosis lists for common chief complaints. However, the ranking within these lists leaves room for improvement.

Suggested Citation

  • Takanobu Hirosawa & Yukinori Harada & Masashi Yokose & Tetsu Sakamoto & Ren Kawamura & Taro Shimizu, 2023. "Diagnostic Accuracy of Differential-Diagnosis Lists Generated by Generative Pretrained Transformer 3 Chatbot for Clinical Vignettes with Common Chief Complaints: A Pilot Study," IJERPH, MDPI, vol. 20(4), pages 1-10, February.
  • Handle: RePEc:gam:jijerp:v:20:y:2023:i:4:p:3378-:d:1068780

    Download full text from publisher

    File URL: https://www.mdpi.com/1660-4601/20/4/3378/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1660-4601/20/4/3378/
    Download Restriction: no

    References listed on IDEAS

    1. Jiaming Zeng & Michael F. Gensheimer & Daniel L. Rubin & Susan Athey & Ross D. Shachter, 2022. "Uncovering interpretable potential confounders in electronic medical records," Nature Communications, Nature, vol. 13(1), pages 1-14, December.
    2. Nicholas Riches & Maria Panagioti & Rahul Alam & Sudeh Cheraghi-Sohi & Stephen Campbell & Aneez Esmail & Peter Bower, 2016. "The Effectiveness of Electronic Differential Diagnoses (DDX) Generators: A Systematic Review and Meta-Analysis," PLOS ONE, Public Library of Science, vol. 11(3), pages 1-26, March.

    Citations

Citations are extracted by the CitEc Project.
    Cited by:

    1. Arpan Kumar Kar & P. S. Varsha & Shivakami Rajan, 2023. "Unravelling the Impact of Generative Artificial Intelligence (GAI) in Industrial Applications: A Review of Scientific and Grey Literature," Global Journal of Flexible Systems Management, Springer;Global Institute of Flexible Systems Management, vol. 24(4), pages 659-689, December.
    2. Sarah Sandmann & Sarah Riepenhausen & Lucas Plagwitz & Julian Varghese, 2024. "Systematic analysis of ChatGPT, Google search and Llama 2 for clinical decision support tasks," Nature Communications, Nature, vol. 15(1), pages 1-8, December.
    3. Konstantinos I. Roumeliotis & Nikolaos D. Tselikas, 2023. "ChatGPT and Open-AI Models: A Preliminary Review," Future Internet, MDPI, vol. 15(6), pages 1-24, May.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Yukinori Harada & Shinichi Katsukura & Ren Kawamura & Taro Shimizu, 2021. "Effects of a Differential Diagnosis List of Artificial Intelligence on Differential Diagnoses by Physicians: An Exploratory Analysis of Data from a Randomized Controlled Study," IJERPH, MDPI, vol. 18(11), pages 1-8, May.


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.