Authors
Listed:
- Sikandar Afridi
- Atif Jan
- Muhammad Abeer Irfan
- Muhammad Irfan Khattak
- Taimur Ahmed Khan
Abstract
Accurate volumetric segmentation of 3D medical imaging modalities is critical for therapy planning and clinical diagnosis, particularly for brain tumor delineation. Traditional convolutional neural network (CNN)-based architectures face challenges while capturing global contextual information and modeling long-range dependencies in complex 3D volumetric data, limiting their segmentation performance. Transformer-based models have emerged as promising alternatives to CNNs for such tasks, addressing their limitations in capturing global spatial dependencies. We propose 3D-ViT-UNet, a novel U-shaped vision transformer (ViT)-based encoder-decoder architecture for end-to-end volumetric brain tumor segmentation. The model employs 3D Window Multi-Head Self-Attention (3D-W-MSA) to capture local features and a 3D Dilated-Window Multi-Head Self-Attention (3D-DW-MSA) to capture global features while reducing computational complexity. Moreover, for preserving absolute and relative positional information and preventing permutation equivalence limitation in transformers, a dynamic position encoding strategy is integrated. The proposed model demonstrates state-of-the-art (SOTA) performance for brain tumor segmentation on the BraTS 2020 dataset. It achieves a superior average Dice Similarity Coefficient (DSC) of 84.81% and a Hausdorff Distance (HD) of 4.87 mm with reduced computational complexity compared to existing methods. Also, an improvement in delineation of tumor boundaries and accurate segmentation across modalities is demonstrated through the qualitative results. 
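The abstract contrasts local windowed attention with dilated-window attention over a 3D token grid. The paper's exact implementation is not reproduced here; the following minimal NumPy sketch only illustrates the general idea (single attention head, no learned relative-position biases, grid sizes chosen so windows tile exactly). All function names, shapes, and parameters are illustrative assumptions, not the authors' code: a plain window partition groups contiguous voxels, while a dilated partition groups voxels at a stride so each window spans the whole volume.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_partition(x, w):
    # x: (D, H, W, C) token grid -> (num_windows, w*w*w, C)
    # contiguous w x w x w windows: the "local" attention scope
    D, H, W, C = x.shape
    x = x.reshape(D // w, w, H // w, w, W // w, w, C)
    x = x.transpose(0, 2, 4, 1, 3, 5, 6)
    return x.reshape(-1, w * w * w, C)

def dilated_window_partition(x, w, g):
    # tokens sampled at stride g form each window, so a w^3 window
    # covers the full (w*g)^3 grid: the "global" attention scope.
    # Assumes D == H == W == w * g for simplicity.
    D, H, W, C = x.shape
    x = x.reshape(w, g, w, g, w, g, C)
    x = x.transpose(1, 3, 5, 0, 2, 4, 6)  # group by residue class
    return x.reshape(-1, w * w * w, C)

def window_attention(tokens, Wq, Wk, Wv):
    # tokens: (num_windows, N, C); attention stays inside each window,
    # so cost scales with window size, not with the whole volume
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scale = q.shape[-1] ** -0.5
    attn = softmax(q @ k.transpose(0, 2, 1) * scale)
    return attn @ v

rng = np.random.default_rng(0)
C, w, g = 8, 2, 2                              # channels, window size, dilation
x = rng.standard_normal((w * g, w * g, w * g, C))  # a 4x4x4 token grid
Wq, Wk, Wv = (rng.standard_normal((C, C)) for _ in range(3))

local = window_attention(window_partition(x, w), Wq, Wk, Wv)
glob = window_attention(dilated_window_partition(x, w, g), Wq, Wk, Wv)
print(local.shape, glob.shape)  # both (8, 8, 8): 8 windows of 8 tokens
```

Both partitions produce the same number of tokens per window, so the attention cost is identical; only the receptive field differs, which is how a dilated window buys global context without full-volume attention.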
Extensive quantitative and qualitative evaluations highlight the capability of 3D-ViT-UNet to achieve high accuracy with a smaller model size and lower FLOPs, making it an effective and efficient solution for clinical applications involving volumetric brain tumor segmentation.
Author summary: Brain tumors vary in size and shape across MRI scans, so their accurate volumetric segmentation is a challenging prerequisite for therapies and surgeries. Manual segmentation is time-consuming, and results can differ between experts. We present 3D-ViT-UNet, an end-to-end volumetric segmentation model that processes an MRI as a volume rather than as independent slices. Our design combines two attention mechanisms: 3D window attention to capture fine local structure and 3D dilated-window attention to efficiently capture broader context for the full tumor extent. To keep the correct spatial order of the input 3D patches, we add a dynamic, input-dependent position encoding that adapts to each MRI scan. Our method achieved state-of-the-art performance with a DSC of 84.81% and an average HD95 of 4.87 mm on the BraTS 2020 dataset. This confirms that 3D-ViT-UNet is an effective and efficient solution for clinical applications, providing high segmentation accuracy with a smaller model size and reduced computational cost.
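The summary describes a dynamic, input-dependent position encoding. The paper's exact scheme is not specified in this listing; one common way to make position information input-dependent (as in conditional position encodings) is to generate a positional signal from the tokens themselves with a lightweight depthwise 3D convolution and add it back. The sketch below is a hedged illustration of that generic idea in NumPy, with hypothetical names and shapes, not the authors' implementation:

```python
import numpy as np

def dynamic_pos_encoding(x, kernel):
    # x: (D, H, W, C) token grid; kernel: (3, 3, 3, C) depthwise weights.
    # The positional term is computed FROM the input, so it adapts to each
    # scan and encodes relative position via the local neighborhood.
    D, H, W, C = x.shape
    p = np.pad(x, ((1, 1), (1, 1), (1, 1), (0, 0)))  # zero-pad spatial dims
    pe = np.zeros_like(x)
    for dz in range(3):
        for dy in range(3):
            for dx in range(3):
                # shift-and-accumulate: a depthwise 3x3x3 convolution
                pe += p[dz:dz + D, dy:dy + H, dx:dx + W] * kernel[dz, dy, dx]
    return x + pe  # tokens enriched with input-conditioned position info

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 4, 4, 8))
kernel = 0.1 * rng.standard_normal((3, 3, 3, 8))
out = dynamic_pos_encoding(x, kernel)
print(out.shape)  # (4, 4, 4, 8): same grid, position-aware tokens
```

Because the zero padding breaks translation symmetry at the borders, the resulting encoding also carries absolute-position cues, which is one reason convolution-based position encodings are used to counter the permutation equivariance of plain self-attention.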
Suggested Citation
Sikandar Afridi & Atif Jan & Muhammad Abeer Irfan & Muhammad Irfan Khattak & Taimur Ahmed Khan, 2026.
"3D-ViT-UNet: 3D Vision transformer based Unet-like model for Volumetric Brain Tumor Segmentation,"
PLOS Digital Health, Public Library of Science, vol. 5(3), pages 1-27, March.
Handle:
RePEc:plo:pdig00:0001323
DOI: 10.1371/journal.pdig.0001323
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pdig00:0001323. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: digitalhealth (email available below). General contact details of provider: https://journals.plos.org/digitalhealth .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.