IEEE Access (Jan 2022)
Hybrid-DANet: An Encoder-Decoder Based Hybrid Weights Alignment With Multi-Dilated Attention Network for Automatic Brain Tumor Segmentation
Abstract
Gliomas are the most common and fastest-growing brain tumors, leading to a high mortality rate in their highest grade. Early diagnosis of gliomas and treatment planning are the most important steps toward enhancing a patient's life expectancy. Among modern imaging techniques, magnetic resonance imaging (MRI) is the most robust and widely used technique for visualizing brain tumors. Existing CNN-based networks mainly rely on multi-branch designs and on increasing the depth/width of the network to enhance segmentation accuracy, at the expense of high computational cost. To mitigate these drawbacks, we propose a hybrid weights alignment with multi-dilated attention network for automatic brain tumor segmentation (Hybrid-DANet). It employs multiple modules incorporated into a baseline encoder-decoder architecture. First, we propose a novel hybrid weight alignment with multi-dilated attention module (HWADA), placed between the skip connections. It can obtain different sets of aligned weights by applying different dilation schemes. These different weight alignments play a vital role in capturing very precise target information while suppressing less informative regions. The module utilizes low- and high-level information through skip connections across each branch of the encoder and decoder. Second, we incorporate a multi-channel multi-scale module (MCS) into the baseline network. It consists of multiple channels that extract channel-wise information at a reduced computational cost. To counter the accuracy saturation caused by the vanishing gradient problem, we also incorporate a residual module (RM). Thus, the RM and MCS yield deep, intrinsic, channel-wise features without expanding the network's depth or width, whereas the novel HWADA not only propagates low-level information but also processes it into more semantic features for the decoder.
We evaluated the proposed technique on the well-known BraTS 2017 and BraTS 2018 datasets, achieving performance comparable to its counterparts.
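The multi-dilated scheme underlying HWADA can be illustrated with a minimal sketch: applying the same kernel at several dilation rates enlarges the receptive field without adding parameters, and stacking the responses gives multi-scale features. The snippet below is a simplified 1-D NumPy analogue under our own assumptions; the function names and the choice of dilation rates (1, 2, 4) are illustrative and not taken from the paper.

```python
# Illustrative sketch only: a 1-D analogue of multi-dilated feature
# extraction; names and dilation rates are assumptions, not the authors' code.
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Same'-padded 1-D convolution with the given dilation rate."""
    k = len(kernel)
    span = dilation * (k - 1)          # receptive-field extent minus one
    pad = span // 2
    xp = np.pad(x, (pad, span - pad))  # zero-pad so output length == input length
    return np.array([
        sum(kernel[j] * xp[i + j * dilation] for j in range(k))
        for i in range(len(x))
    ])

def multi_dilated(x, kernel, dilations=(1, 2, 4)):
    """Stack responses at several dilation rates: one row per scale."""
    return np.stack([dilated_conv1d(x, kernel, d) for d in dilations])

x = np.arange(8, dtype=float)
feats = multi_dilated(x, np.array([1.0, 0.0, -1.0]))  # shape (3, 8)
```

Each row of `feats` sees the signal at a different scale with the same three-tap kernel, which is the parameter-free receptive-field growth that dilation provides.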
Keywords