
When Medical AI Explanations Help and When They Harm

Author

Listed:
  • Manshu Khanna
  • Ziyi Wang
  • Lijia Wei
  • Lian Xue

Abstract

We document a fundamental paradox in AI transparency: explanations improve decisions when algorithms are correct but systematically worsen them when algorithms err. In an experiment with 257 medical students making 3,855 diagnostic decisions, we find explanations increase accuracy by 6.3 percentage points when AI is correct (73% of cases) but decrease it by 4.9 points when incorrect (27% of cases). This asymmetry arises because modern AI systems generate equally persuasive explanations regardless of recommendation quality: physicians cannot distinguish helpful from misleading guidance. We show physicians treat explained AI as 15.2 percentage points more accurate than reality, with over-reliance persisting even for erroneous recommendations. Competent physicians with appropriate uncertainty suffer most from the AI transparency paradox (-12.4pp when AI errs), while overconfident novices benefit most (+9.9pp net). Welfare analysis reveals that selective transparency generates $2.59 billion in annual healthcare value, 43% more than the $1.82 billion from mandated universal transparency.
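The abstract's headline figures can be combined into a rough overall effect. Below is a minimal back-of-the-envelope sketch in Python, assuming the per-case effects weight linearly by the reported correct/incorrect case shares; this linear weighting is our assumption for illustration, not a calculation reported by the authors.

```python
# Back-of-the-envelope check of the abstract's headline numbers.
# Assumes the per-case accuracy effects combine linearly with the
# reported shares of correct vs. incorrect AI recommendations.

p_correct = 0.73       # share of cases where the AI is correct
p_incorrect = 0.27     # share of cases where the AI errs
gain_correct = 6.3     # accuracy gain (pp) from explanations when AI is correct
loss_incorrect = 4.9   # accuracy loss (pp) from explanations when AI errs

net_effect = p_correct * gain_correct - p_incorrect * loss_incorrect
print(f"Net expected accuracy change: {net_effect:+.2f} pp")

# Welfare comparison: selective vs. mandated universal transparency.
selective = 2.59   # $ billions per year
universal = 1.82   # $ billions per year
print(f"Selective exceeds universal by {selective / universal - 1:.1%}")
```

Run as written, this yields a net accuracy gain of about +3.3 pp and a welfare gap of roughly 42%, consistent with the 43% figure reported in the abstract.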

Suggested Citation

  • Manshu Khanna & Ziyi Wang & Lijia Wei & Lian Xue, 2025. "When Medical AI Explanations Help and When They Harm," Papers 2512.08424, arXiv.org.
  • Handle: RePEc:arx:papers:2512.08424

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2512.08424
    File Function: Latest version
    Download Restriction: no