IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2024)
Multilevel Feature Interaction Network for Remote Sensing Images Semantic Segmentation
Abstract
High-spatial-resolution (HSR) remote sensing images present significant challenges due to their highly complex backgrounds, large numbers of densely distributed small targets, and land targets that are easily confused with one another. These characteristics make existing methods ineffective at accurately segmenting small targets and prone to boundary blurring. In response to these challenges, we introduce a novel multilevel feature interaction network (MFIN). MFIN is designed as a dual-branch U-shaped interactive decoding structure that jointly performs semantic segmentation and edge detection. Notably, this study is the first to enhance HSR remote sensing image analysis by iteratively refining features at multiple levels for different tasks. We design a feature interaction module (FIM) that refines semantic features through multiscale attention and lets them interact with edge features at the same scale; the refined features then serve as input for iterative optimization in the FIM at the next scale. In addition, a lightweight global feature module is designed to adaptively extract global contextual information from features at different scales, thereby enhancing their semantic accuracy. Furthermore, to mitigate the semantic dilution caused by upsampling, a semantic-guided fusion module is introduced to strengthen the propagation of rich semantic information among features. The proposed method achieves state-of-the-art segmentation performance on four publicly available remote sensing datasets: Potsdam, Vaihingen, LoveDA, and UAVid. Notably, our MFIN has only 15.4M parameters and 34.2 GFLOPs, achieving a favorable balance between accuracy and efficiency.
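As a rough illustration of the cross-task interaction described above, the following PyTorch sketch shows one way an FIM-style block could refine semantic features with multiscale attention and exchange information with edge features at the same scale. The class name, channel sizes, dilation rates, and the exact attention form are assumptions made for illustration; they are not taken from the paper itself.

```python
# Hypothetical sketch of the feature-interaction idea from the abstract.
# All design details below (dilated-conv multiscale attention, 1x1 fusion)
# are illustrative assumptions, not the paper's exact FIM definition.
import torch
import torch.nn as nn


class FeatureInteractionModule(nn.Module):
    """Refines a semantic feature map with multiscale attention and lets it
    interact with an edge feature map at the same spatial scale."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        # Multiscale context via parallel dilated convolutions (assumption).
        self.multiscale = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        )
        self.attn = nn.Sequential(
            nn.Conv2d(channels * len(dilations), channels, 1),
            nn.Sigmoid(),
        )
        # 1x1 projections for the mutual semantic/edge exchange.
        self.sem_update = nn.Conv2d(channels * 2, channels, 1)
        self.edge_update = nn.Conv2d(channels * 2, channels, 1)

    def forward(self, sem: torch.Tensor, edge: torch.Tensor):
        # Multiscale attention over the semantic branch.
        ctx = torch.cat([branch(sem) for branch in self.multiscale], dim=1)
        sem = sem * self.attn(ctx)
        # Same-scale interaction: each branch is refined using the other;
        # both outputs would feed the FIM at the next (finer) decoder scale.
        sem_out = self.sem_update(torch.cat([sem, edge], dim=1))
        edge_out = self.edge_update(torch.cat([edge, sem], dim=1))
        return sem_out, edge_out


if __name__ == "__main__":
    fim = FeatureInteractionModule(channels=64)
    sem = torch.randn(1, 64, 32, 32)   # semantic features at one decoder scale
    edge = torch.randn(1, 64, 32, 32)  # edge features at the same scale
    sem_out, edge_out = fim(sem, edge)
    print(sem_out.shape, edge_out.shape)  # both torch.Size([1, 64, 32, 32])
```

In the dual-branch decoder described in the abstract, such a block would be applied once per scale, with its two outputs upsampled and passed to the next FIM so that semantic and edge features are iteratively refined together.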
Keywords