IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2025)

Toward Model-Independent Separative Training for Deep Hyperspectral Anomaly Detection With Mask Guidance

  • Xi Su,
  • Xiangfei Shen,
  • Haijun Liu,
  • Lihui Chen,
  • Gemine Vivone,
  • Xichuan Zhou

DOI
https://doi.org/10.1109/jstars.2025.3580751
Journal volume & issue
Vol. 18
pp. 15412 – 15426

Abstract


Hyperspectral anomaly detection (HAD) aims to recognize a minority of anomalies that are spectrally different from their surrounding background, without prior knowledge. Deep neural networks (DNNs) have shown remarkable performance in this field thanks to their powerful ability to model the complex background. However, during background modeling, DNNs may encounter the identical mapping problem (IMP), which incorporates part of the nonbackground components into the estimated background and leads to a significant performance reduction. To address this limitation, we propose a model-independent separative training strategy for DNNs, named DeepSeT. Our method introduces a latent binary mask that identifies potential anomalies and background to guide the training. Based on this mask, we design a separative loss function that reconstructs the background while suppressing the anomalies, thus learning a pure background. For efficient anomaly suppression, we propose an anomaly suppression regularization that uses the second-order Laplacian of Gaussian operator to mitigate the large variations of the anomalies in the loss function. To preserve separability, the mask is periodically updated during training by binarizing the reconstruction errors. Our training strategy is model-independent, so it can be applied to different DNN structures. DeepSeT achieves superior results compared to state-of-the-art methods on benchmark datasets. In addition, to demonstrate its model independence, we applied our training strategy to different deep network structures, achieving improved detection performance compared to their original versions.
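The two core ingredients described in the abstract, a mask-weighted separative loss with a Laplacian-of-Gaussian suppression term and a periodic mask update that binarizes reconstruction errors, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names (`separative_loss`, `update_mask`), the mask convention (1 = background, 0 = potential anomaly), the use of a thresholding quantile, and the weight `lam` are all assumptions for illustration.

```python
import numpy as np
from scipy import ndimage  # gaussian_laplace stands in for the LoG operator


def separative_loss(recon, x, mask, lam=0.1):
    """Hypothetical separative loss on a single-band image.

    recon, x : 2-D arrays (reconstruction and input)
    mask     : 2-D binary array; 1 = background, 0 = potential anomaly
    """
    # Reconstruct only the background pixels.
    bg_err = np.mean((mask * (recon - x)) ** 2)
    # Suppress anomalies via a second-order LoG response on anomaly pixels,
    # damping their large local variations in the loss.
    log_resp = ndimage.gaussian_laplace(recon, sigma=1.0)
    anomaly_pen = np.mean(((1.0 - mask) * log_resp) ** 2)
    return bg_err + lam * anomaly_pen


def update_mask(recon, x, quantile=0.9):
    """Periodic mask update: binarize per-pixel reconstruction errors.

    Pixels whose error exceeds the chosen quantile are flagged as
    potential anomalies (mask = 0); the rest stay background (mask = 1).
    """
    err = (recon - x) ** 2
    thresh = np.quantile(err, quantile)
    return (err <= thresh).astype(float)
```

In a training loop, `update_mask` would be called every few epochs on the current reconstruction, and `separative_loss` would replace the plain reconstruction loss; because neither function touches the network itself, the same scheme can wrap different DNN backbones, matching the model-independent claim.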

Keywords