
A comparative study of explainability methods for whole slide classification of lymph node metastases using vision transformers

Authors

  • Jens Rahnfeld
  • Mehdi Naouar
  • Gabriel Kalweit
  • Joschka Boedecker
  • Estelle Dubruc
  • Maria Kalweit

Abstract

Recent advancements in deep learning have shown promise in enhancing the performance of medical image analysis. In pathology, automated whole slide imaging has transformed clinical workflows by streamlining routine tasks and supporting diagnosis and prognosis. However, the lack of transparency of deep learning models, often described as black boxes, poses a significant barrier to their clinical adoption. This study evaluates various explainability methods for Vision Transformers, assessing their effectiveness in explaining the rationale behind classification predictions on histopathological images. Using a Vision Transformer trained on the publicly available CAMELYON16 dataset, comprising 399 whole slide images of lymph node metastases from patients with breast cancer, we conducted a comparative analysis of a diverse range of state-of-the-art techniques for generating explanations as heatmaps, including Attention Rollout, Integrated Gradients, RISE, and ViT-Shapley. Our findings reveal that Attention Rollout and Integrated Gradients are prone to artifacts, while RISE and, in particular, ViT-Shapley generate more reliable and interpretable heatmaps. ViT-Shapley also demonstrated a faster runtime and superior performance on insertion and deletion metrics. These results suggest that integrating ViT-Shapley-based heatmaps into pathology reports could enhance trust and scalability in clinical workflows, facilitating the adoption of explainable artificial intelligence in pathology.

Author summary: The objective of our research was to investigate methods for enhancing the explainability of state-of-the-art Vision Transformer models for medical image analysis in histopathology. These models are often perceived as opaque, which can limit their deployment in clinical settings. In our study, we evaluated various approaches for generating visual explanations that highlight the regions of a histopathology image that influence the Vision Transformer's decision and can serve as heatmaps for pathologists. Comparing these techniques, we found that the ViT-Shapley method generated the most reliable and effective heatmaps; in evaluating practical impact, we considered both clinical usability and computational efficiency. Such heatmaps can help clinicians understand and trust the model's decisions, supporting the integration of these advanced AI tools into routine clinical workflows to improve diagnostic support. Our findings indicate that combining Transformer-based models with effective explainability methods can enhance the accuracy and transparency of automated whole slide image analysis, ultimately benefiting patient care.
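To give a concrete sense of the simplest heatmap technique named in the abstract, the sketch below implements Attention Rollout, which composes a Vision Transformer's per-layer attention maps into a single token-to-token relevance map. This is a minimal NumPy illustration under our own assumptions (function name, array shapes, and head-averaging are ours), not the authors' implementation:

```python
import numpy as np

def attention_rollout(attentions):
    """Attention Rollout: propagate attention through a transformer.

    attentions: list of per-layer arrays, each shaped
                (num_heads, num_tokens, num_tokens), with rows that
                sum to 1 (softmax outputs). Shapes are hypothetical.
    Returns a (num_tokens, num_tokens) rollout map; the row for the
    CLS token can be reshaped into a patch-level heatmap.
    """
    num_tokens = attentions[0].shape[-1]
    rollout = np.eye(num_tokens)
    for layer_attn in attentions:
        attn = layer_attn.mean(axis=0)                   # average over heads
        attn = attn + np.eye(num_tokens)                 # model the residual connection
        attn = attn / attn.sum(axis=-1, keepdims=True)   # re-normalize rows
        rollout = attn @ rollout                         # compose with earlier layers
    return rollout
```

Because each re-normalized layer map is row-stochastic, the composed rollout map is as well, so each token's relevance scores over the input tokens sum to 1.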

Suggested Citation

  • Jens Rahnfeld & Mehdi Naouar & Gabriel Kalweit & Joschka Boedecker & Estelle Dubruc & Maria Kalweit, 2025. "A comparative study of explainability methods for whole slide classification of lymph node metastases using vision transformers," PLOS Digital Health, Public Library of Science, vol. 4(4), pages 1-21, April.
  • Handle: RePEc:plo:pdig00:0000792
    DOI: 10.1371/journal.pdig.0000792

    Download full text from publisher

    File URL: https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000792
    Download Restriction: no

    File URL: https://journals.plos.org/digitalhealth/article/file?id=10.1371/journal.pdig.0000792&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pdig.0000792?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pdig00:0000792. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: digitalhealth (email available below). General contact details of provider: https://journals.plos.org/digitalhealth .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.