Frontiers in Neuroscience (Apr 2023)

GL-Segnet: Global-Local representation learning net for medical image segmentation

  • Di Gai,
  • Jiqian Zhang,
  • Yusong Xiao,
  • Weidong Min,
  • Hui Chen,
  • Qi Wang,
  • Pengxiang Su,
  • Zheng Huang

DOI
https://doi.org/10.3389/fnins.2023.1153356
Journal volume & issue
Vol. 17

Abstract


Medical image segmentation has long been a compelling and fundamental problem in the realm of neuroscience. It is an extremely challenging task because irrelevant background information strongly interferes with segmentation of the target. State-of-the-art methods fail to address long-range and short-range dependencies simultaneously, and commonly emphasize semantic characterization while ignoring the geometric detail implied in shallow feature maps, resulting in the loss of crucial features. To tackle this problem, we propose a Global-Local representation learning net for medical image segmentation, namely GL-Segnet. In the Feature encoder, we utilize the Multi-Scale Convolution (MSC) and Multi-Scale Pooling (MSP) modules to encode global semantic representation information at the shallow levels of the network, and multi-scale feature fusion operations are applied to enrich local geometric detail information in a cross-level manner. Beyond that, we adopt a global semantic feature extraction module to filter out irrelevant background information. In the Attention-enhancing Decoder, we use an attention-based feature decoding module to refine the multi-scale fused feature information, which provides effective cues for attention decoding. We exploit the structural similarity between images and edge gradient information to propose a hybrid loss that improves the segmentation accuracy of the model. Extensive experiments on medical image segmentation datasets from GlaS, ISIC, Brain Tumors, and SIIM-ACR demonstrate that GL-Segnet is superior to existing state-of-the-art methods in both subjective visual quality and objective evaluation.
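
The abstract mentions a hybrid loss built from structural similarity and edge gradient information on top of a standard segmentation objective. The snippet below is a minimal, hedged sketch of how such a combination could look in PyTorch for a single-channel (binary) mask; it is not the authors' released code, and the uniform SSIM window, Sobel-based edge term, and the loss weights are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def ssim_loss(pred, target, window_size=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """1 - SSIM computed with a uniform averaging window (a common simplification)."""
    pad = window_size // 2
    mu_p = F.avg_pool2d(pred, window_size, 1, pad)
    mu_t = F.avg_pool2d(target, window_size, 1, pad)
    var_p = F.avg_pool2d(pred * pred, window_size, 1, pad) - mu_p ** 2
    var_t = F.avg_pool2d(target * target, window_size, 1, pad) - mu_t ** 2
    cov = F.avg_pool2d(pred * target, window_size, 1, pad) - mu_p * mu_t
    ssim = ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / (
        (mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2)
    )
    return 1.0 - ssim.mean()

def edge_gradient_loss(pred, target):
    """L1 distance between Sobel gradient magnitudes of prediction and ground-truth mask."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=pred.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)

    def grad_mag(x):
        gx = F.conv2d(x, kx, padding=1)
        gy = F.conv2d(x, ky, padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

    return F.l1_loss(grad_mag(pred), grad_mag(target))

def hybrid_loss(logits, target, w_bce=1.0, w_ssim=1.0, w_edge=1.0):
    """Weighted sum of BCE, SSIM, and edge-gradient terms (weights are assumed, not from the paper)."""
    pred = torch.sigmoid(logits)
    bce = F.binary_cross_entropy_with_logits(logits, target)
    return w_bce * bce + w_ssim * ssim_loss(pred, target) + w_edge * edge_gradient_loss(pred, target)
```

In this sketch the SSIM term rewards agreement in local structure, while the Sobel-gradient term penalizes boundary mismatch, which is one plausible reading of how edge gradient information could sharpen segmentation contours.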

Keywords