Authors
Listed:
- Mohammad Zahedipour
- Mohammad Saniee Abadeh
- Shakila Shojaei
Abstract
Convolutional neural networks (CNNs) are widely recognized for their high precision in image classification. Nevertheless, the lack of transparency in these black-box models raises concerns in sensitive domains such as healthcare, where it can be difficult to understand what knowledge the model has acquired to reach its decisions. To address this concern, several strategies within the field of explainable AI (XAI) have been developed to enhance model interpretability. This study introduces a novel XAI technique, GASHAP, which integrates a genetic algorithm (GA) with SHapley Additive exPlanations (SHAP) to improve the explainability of our 3D convolutional neural network (3D-CNN) model. The model is designed to classify magnetic resonance imaging (MRI) brain scans of individuals with Alzheimer’s disease and cognitively normal controls. Deep SHAP, a widely used XAI technique, reveals the influence that individual voxels exert on the final classification outcome (Lundberg SM, Lee SI. A unified approach to interpreting model predictions. In: Advances in Neural Information Processing Systems, 2017. 4765–74. https://doi.org/10.5555/3295222.3295230). However, voxel-level attributions alone lack interpretive clarity. The objective of this study is therefore to report findings at the level of anatomically defined brain regions. Critical regions are first ranked by their SHAP values; a genetic algorithm is then applied to generate a definitive mask highlighting the regions most significant for Alzheimer’s disease diagnosis (Shahamat H, Saniee Abadeh M. Brain MRI analysis using a deep learning based evolutionary approach. Neural Netw. 2020;126:218–34. https://doi.org/10.1016/j.neunet.2020.03.017 PMID: 32259762). The research commenced by implementing a 3D-CNN for MRI image classification. Subsequently, the GASHAP technique was applied to enhance model transparency.
The final result is a brain mask that delineates the pertinent regions crucial for Alzheimer’s disease diagnosis. Finally, a comparative analysis is conducted between our findings and those of previous studies.
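The pipeline the abstract describes — voxel-level SHAP attributions aggregated into atlas-defined regions, followed by a GA search for a binary region mask — can be sketched roughly as below. This is an illustrative sketch under assumptions, not the authors' implementation: the region count, the region-importance values (standing in for mean |SHAP| per region), the sparsity-penalized fitness, and all GA settings are hypothetical.

```python
# Illustrative sketch (assumptions, not the paper's code): select a binary
# mask over atlas regions with a simple genetic algorithm, where each
# region's importance stands in for aggregated voxel-level SHAP values.
import numpy as np

rng = np.random.default_rng(0)

N_REGIONS = 20  # hypothetical number of atlas-defined brain regions

# Hypothetical region-level importance, e.g. mean |SHAP| over each
# region's voxels; here drawn at random for the sketch.
region_importance = rng.random(N_REGIONS)

def fitness(mask: np.ndarray, sparsity_weight: float = 0.05) -> float:
    """Reward total importance of selected regions, penalize mask size."""
    return float(region_importance @ mask - sparsity_weight * mask.sum())

def genetic_mask_search(pop_size=30, generations=50, p_mut=0.05):
    # Population of binary masks, one bit per region.
    pop = rng.integers(0, 2, size=(pop_size, N_REGIONS))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Binary tournament selection.
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        winners = np.where(scores[idx[:, 0]] >= scores[idx[:, 1]],
                           idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # Single-point crossover between consecutive parent pairs.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            c = rng.integers(1, N_REGIONS)
            children[i, c:] = parents[i + 1, c:]
            children[i + 1, c:] = parents[i, c:]
        # Bit-flip mutation.
        flips = rng.random(children.shape) < p_mut
        pop = np.where(flips, 1 - children, children)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()]

best_mask = genetic_mask_search()
print("selected regions:", np.flatnonzero(best_mask))
```

In the study itself, the fitness would be driven by the trained 3D-CNN's behavior on the masked input rather than a fixed importance vector; the sketch only shows the shape of the search over region masks.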
Suggested Citation
Mohammad Zahedipour & Mohammad Saniee Abadeh & Shakila Shojaei, 2026.
"Alzheimer’s disease prediction via an explainable CNN using genetic algorithm and SHAP values,"
PLOS ONE, Public Library of Science, vol. 21(1), pages 1-26, January.
Handle:
RePEc:plo:pone00:0337800
DOI: 10.1371/journal.pone.0337800