Journal of King Saud University: Computer and Information Sciences (Jul 2024)

GAIR-U-Net: 3D guided attention inception residual u-net for brain tumor segmentation using multimodal MRI images

  • Evans Kipkoech Rutoh,
  • Qin Zhi Guang,
  • Noor Bahadar,
  • Rehan Raza,
  • Muhammad Shehzad Hanif

Journal volume & issue
Vol. 36, no. 6
p. 102086

Abstract

Deep learning technologies have led to substantial breakthroughs in biomedical image analysis. Accurate brain tumor segmentation is an essential aspect of treatment planning. Radiologists agree that manual segmentation is a difficult and time-consuming task that frequently delays the diagnostic process. While U-Net-based methods have been widely used for brain tumor segmentation, many challenges persist, particularly for tumors of varying sizes, locations, and shapes. Additionally, segmenting tumor sub-regions with complex structures requires a comprehensive model, which can increase computational complexity and potentially cause vanishing-gradient issues. This study presents a novel method, the 3D Guided Attention-based deep Inception Residual U-Net (GAIR-U-Net), to address these challenges. The model combines attention mechanisms, an inception module, and residual blocks with dilated convolution to enhance feature representation and spatial-context understanding. Its backbone is the U-Net, which leverages inception and residual connections to capture intricate patterns and hierarchical features while expanding the model's width in three-dimensional space without significantly increasing computational complexity. The attention mechanisms focus the network on important regions while suppressing irrelevant details. The dilated convolutions help the network learn both local and global information, improving accuracy and adaptability in segmenting tumors. All experiments in this study were carried out on multimodal MRI scans (T1-weighted, T1ce, T2-weighted, and FLAIR sequences) from the BraTS 2020 dataset. The proposed model was trained and evaluated on this dataset and showed promising performance compared to previous methods. On the BraTS 2020 validation dataset, it obtained Dice scores of 0.8796, 0.8634, and 0.8441 for the whole tumor (WT), tumor core (TC), and enhancing tumor (ET), respectively. These results demonstrate the model's efficacy in precisely segmenting brain tumors across modalities. Comparative analyses underscore the model's versatility in handling variations in tumor shape, size, and location, making it a promising solution for clinical applications.
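
The abstract describes the model's building blocks but the listing includes no code. The sketch below is a minimal, hypothetical PyTorch reconstruction of two of them: an inception-style residual block whose parallel 3D branches use different dilation rates to capture both local and global context, and an additive attention gate that re-weights skip-connection features while suppressing irrelevant voxels. All class names, channel counts, normalization choices, and dilation rates here are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class InceptionResidualBlock3D(nn.Module):
        # Parallel 3D convolution branches with increasing dilation rates,
        # concatenated and added to a 1x1x1 shortcut (residual connection).
        def __init__(self, in_ch, out_ch):
            super().__init__()
            branch_ch = out_ch // 4
            self.branches = nn.ModuleList([
                nn.Sequential(
                    nn.Conv3d(in_ch, branch_ch, kernel_size=3,
                              padding=d, dilation=d, bias=False),
                    nn.InstanceNorm3d(branch_ch),
                    nn.ReLU(inplace=True))
                for d in (1, 2, 3, 4)])  # dilation rates are assumptions
            self.shortcut = nn.Conv3d(in_ch, out_ch, kernel_size=1)

        def forward(self, x):
            out = torch.cat([branch(x) for branch in self.branches], dim=1)
            return F.relu(out + self.shortcut(x))  # residual fusion

    class AttentionGate3D(nn.Module):
        # Additive attention gate: a coarse gating signal g re-weights
        # the skip-connection features x before decoder concatenation.
        def __init__(self, skip_ch, gate_ch, inter_ch):
            super().__init__()
            self.w_x = nn.Conv3d(skip_ch, inter_ch, kernel_size=1)
            self.w_g = nn.Conv3d(gate_ch, inter_ch, kernel_size=1)
            self.psi = nn.Conv3d(inter_ch, 1, kernel_size=1)

        def forward(self, x, g):
            g = F.interpolate(g, size=x.shape[2:], mode="trilinear",
                              align_corners=False)
            alpha = torch.sigmoid(
                self.psi(F.relu(self.w_x(x) + self.w_g(g))))
            return x * alpha  # per-voxel attention weights in [0, 1]

    # Example: four input channels, one per MRI modality
    # (T1-weighted, T1ce, T2-weighted, FLAIR), on a 64^3 patch.
    block = InceptionResidualBlock3D(in_ch=4, out_ch=32)
    x = torch.randn(1, 4, 64, 64, 64)  # (batch, modality, D, H, W)
    print(block(x).shape)              # torch.Size([1, 32, 64, 64, 64])

In a U-Net-style encoder-decoder, a gate like AttentionGate3D would typically be applied to each skip connection, with the upsampled decoder features serving as the gating signal, before concatenation in the decoder path.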
