Printed from https://ideas.repec.org/a/plo/pdig00/0000801.html

Implicit versus explicit Bayesian priors for epistemic uncertainty estimation in clinical decision support

Author

Listed:
  • Malte Blattmann
  • Adrian Lindenmeyer
  • Stefan Franke
  • Thomas Neumuth
  • Daniel Schneider

Abstract

Deep learning models offer transformative potential for personalized medicine by providing automated, data-driven support for complex clinical decision-making. However, their reliability degrades on out-of-distribution inputs, and traditional point-estimate predictors can give overconfident outputs even in regions where the model has little evidence. This shortcoming highlights the need for decision-support systems that quantify and communicate per-query epistemic (knowledge) uncertainty. Approximate Bayesian deep learning methods address this need by introducing principled uncertainty estimates over the model's function. In this work, we compare three such methods on the task of predicting prostate cancer-specific mortality for treatment planning, using data from the PLCO cancer screening trial. All approaches achieve strong discriminative performance (AUROC = 0.86) and produce well-calibrated probabilities in-distribution, yet they differ markedly in the fidelity of their epistemic uncertainty estimates. We show that implicit functional-prior methods, specifically neural network ensembles and variational Bayesian neural networks with factorized weight priors, exhibit reduced fidelity when approximating the posterior distribution and yield systematically biased estimates of epistemic uncertainty. By contrast, models employing explicitly defined, distance-aware priors, such as spectral-normalized neural Gaussian processes (SNGP), provide more accurate posterior approximations and more reliable uncertainty quantification. These properties make explicitly distance-aware architectures particularly promising for building trustworthy clinical decision-support tools.

Author summary: In this study, we address a critical challenge in applying AI to personalized medicine: models often make confident predictions even when faced with patient data unlike anything they have seen before.
We evaluated three strategies for helping these models recognize and signal their own uncertainty, using real-world prostate cancer screening data. While all approaches performed well on familiar cases, they differed in how reliably they indicated doubt on unfamiliar patients. We discovered that methods explicitly designed to gauge how “far” a new patient’s data lies from prior examples produced far more trustworthy uncertainty estimates than techniques relying on hidden assumptions. By clearly identifying when the model is unsure, these approaches can help clinicians avoid over-reliance on AI recommendations. Our findings suggest that uncertainty-aware models could serve as safer, more transparent partners in treatment planning. Ultimately, this work takes us a step closer to AI systems that not only predict health outcomes but also responsibly signal when they might be guessing—an essential feature for trustworthy clinical decision support.
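The contrast the abstract draws, implicit priors (ensembles, factorized-weight variational BNNs) versus explicit distance-aware priors (SNGP), can be sketched on a toy problem. The code below is a hypothetical illustration, not the paper's code and not a real SNGP: the ensemble members are tiny logistic models, and the distance-aware score is a plain RBF-kernel distance to the training set standing in for SNGP's Gaussian-process output layer. All data, names, and hyperparameters here are invented for illustration.

```python
import numpy as np

# Hypothetical toy sketch, NOT the paper's code or the SNGP implementation.
# (1) Implicit prior: a small ensemble, whose member disagreement is read
#     as epistemic uncertainty.
# (2) Explicit distance-aware prior (core idea only): uncertainty that grows
#     with a query's kernel distance from the training data.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_member(X, y, seed, lr=0.1, steps=500):
    """Fit one logistic-regression ensemble member by gradient descent."""
    r = np.random.default_rng(seed)
    w, b = r.normal(scale=0.5, size=X.shape[1]), 0.0
    for _ in range(steps):
        g = sigmoid(X @ w + b) - y                 # dLoss/dlogits for log loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def ensemble_uncertainty(members, x):
    """Epistemic score = std of member probabilities at a single query x."""
    probs = np.array([sigmoid(x @ w + b) for w, b in members])
    return probs.mean(), probs.std()

def distance_aware_uncertainty(X_train, x, length_scale=1.0):
    """Distance-aware score in [0, 1]: ~0 on training data, -> 1 far away."""
    d2 = ((X_train - x) ** 2).sum(axis=1)
    return 1.0 - np.exp(-d2 / (2.0 * length_scale**2)).max()

# Toy binary task: two well-separated Gaussian blobs.
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
members = [train_member(X, y, seed=s) for s in range(5)]

x_in, x_out = np.array([1.0, 1.0]), np.array([10.0, 10.0])  # in-dist vs. OOD
_, ens_in = ensemble_uncertainty(members, x_in)
_, ens_out = ensemble_uncertainty(members, x_out)
dist_in = distance_aware_uncertainty(X, x_in)
dist_out = distance_aware_uncertainty(X, x_out)
```

On this toy task the sketch mirrors the paper's qualitative finding: far from the data, every ensemble member extrapolates to the same saturated probability, so member disagreement (the ensemble's epistemic score) collapses toward zero, while the distance-aware score correctly rises toward its maximum.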

Suggested Citation

  • Malte Blattmann & Adrian Lindenmeyer & Stefan Franke & Thomas Neumuth & Daniel Schneider, 2025. "Implicit versus explicit Bayesian priors for epistemic uncertainty estimation in clinical decision support," PLOS Digital Health, Public Library of Science, vol. 4(7), pages 1-23, July.
  • Handle: RePEc:plo:pdig00:0000801
    DOI: 10.1371/journal.pdig.0000801

    Download full text from publisher

    File URL: https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000801
    Download Restriction: no

    File URL: https://journals.plos.org/digitalhealth/article/file?id=10.1371/journal.pdig.0000801&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pdig.0000801?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pdig00:0000801. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: digitalhealth (email available below). General contact details of provider: https://journals.plos.org/digitalhealth.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.