
Bias in medical AI: Implications for clinical decision-making

Author

Listed:
  • James L Cross
  • Michael A Choma
  • John A Onofrey

Abstract

Biases in medical artificial intelligence (AI) arise and compound throughout the AI lifecycle. These biases can have significant clinical consequences, especially in applications that involve clinical decision-making. Left unaddressed, biased medical AI can lead to substandard clinical decisions and the perpetuation and exacerbation of longstanding healthcare disparities. We discuss potential biases that can arise at different stages in the AI development pipeline and how they can affect AI algorithms and clinical decision-making. Bias can occur in data features and labels, model development and evaluation, deployment, and publication. Insufficient sample sizes for certain patient groups can result in suboptimal performance, algorithm underestimation, and clinically meaningless predictions. Missing patient findings can also produce biased model behavior, including capturable but nonrandomly missing data, such as diagnosis codes, and data that is not usually or not easily captured, such as social determinants of health. Expertly annotated labels used to train supervised learning models may reflect implicit cognitive biases or substandard care practices. Overreliance on performance metrics during model development may obscure bias and diminish a model’s clinical utility. When applied to data outside the training cohort, model performance can deteriorate from previous validation and can do so differentially across subgroups. How end users interact with deployed solutions can introduce bias. Finally, where models are developed and published, and by whom, impacts the trajectories and priorities of future medical AI development. Solutions to mitigate bias must be implemented with care; they include the collection of large and diverse data sets, statistical debiasing methods, thorough model evaluation, an emphasis on model interpretability, and standardized bias reporting and transparency requirements. Prior to real-world implementation in clinical settings, rigorous validation through clinical trials is critical to demonstrate unbiased application. Addressing biases across model development stages is crucial for ensuring that all patients benefit equitably from the future of medical AI.

Author summary

In this work, we explore the challenges of biases that emerge in medical artificial intelligence (AI). These biases, if not adequately addressed, can lead to poor clinical decisions and worsen existing healthcare inequalities by influencing an AI’s decisions in ways that disadvantage some patient groups over others. We discuss several stages in the process of developing a medical AI model where bias can emerge, including collecting data, training a model, and real-world application. For instance, the way data is collected can exclude or misrepresent certain patient populations, leading to less effective and inequitable AI systems. We provide examples, both hypothetical and real, to illustrate how these biases can alter clinical outcomes. These examples show that biases are not just possible; they are a significant risk if not actively countered. Our review stresses the importance of diverse and comprehensive data sets, sophisticated statistical methods to remove biases, and clear reporting standards; these are key components of a future where medical AI works equitably and supports high-quality clinical care for everyone.
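To make the subgroup point concrete, here is a minimal sketch, not taken from the article, of the kind of stratified evaluation the abstract argues for: scoring a model overall and within each patient subgroup so that differential performance is not hidden behind a single aggregate metric. The synthetic data, the 30-patient minimum subgroup size, and the choice of AUROC are illustrative assumptions.

    # Illustrative only: per-subgroup discrimination for a binary clinical risk model.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def subgroup_auroc(y_true, y_score, groups, min_n=30):
        """AUROC overall and within each subgroup; None where a subgroup is too
        small or single-class to score reliably."""
        y_true, y_score, groups = map(np.asarray, (y_true, y_score, groups))
        results = {"overall": roc_auc_score(y_true, y_score)}
        for g in np.unique(groups):
            mask = groups == g
            if mask.sum() < min_n or len(np.unique(y_true[mask])) < 2:
                results[g] = None
                continue
            results[g] = roc_auc_score(y_true[mask], y_score[mask])
        return results

    # Synthetic example: y_true are observed outcomes, y_score the model's
    # predicted risks, groups an arbitrary demographic attribute.
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=500)
    groups = rng.choice(["A", "B"], size=500)
    y_score = np.clip(0.3 * y_true + rng.normal(0.4, 0.25, size=500), 0, 1)
    print(subgroup_auroc(y_true, y_score, groups))

Reporting a per-subgroup table rather than one aggregate number is the simplest guard against the metric overreliance the abstract describes.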

Suggested Citation

  • James L Cross & Michael A Choma & John A Onofrey, 2024. "Bias in medical AI: Implications for clinical decision-making," PLOS Digital Health, Public Library of Science, vol. 3(11), pages 1-19, November.
  • Handle: RePEc:plo:pdig00:0000651
    DOI: 10.1371/journal.pdig.0000651

    Download full text from publisher

    File URL: https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000651
    Download Restriction: no

    File URL: https://journals.plos.org/digitalhealth/article/file?id=10.1371/journal.pdig.0000651&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pdig.0000651?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pdig00:0000651. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: digitalhealth (email available below). General contact details of provider: https://journals.plos.org/digitalhealth.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.