IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2025)

TPDTNet: Two-Phase Distillation Training for Visible-to-Infrared Unsupervised Domain Adaptive Object Detection

  • Siyu Wang,
  • Xiaogang Yang,
  • Ruitao Lu,
  • Shuang Su,
  • Bin Tang,
  • Tao Zhang,
  • Zhengjie Zhu

DOI: https://doi.org/10.1109/JSTARS.2025.3528057
Journal volume & issue: Vol. 18, pp. 4255–4272

Abstract

Migrating detection models from the visible domain to the infrared domain poses great challenges for remote sensing target detection, including a lack of data annotations in the infrared domain and feature differences between the two domains. To improve detection accuracy on infrared images, we propose a novel two-phase distillation training network (TPDTNet). In the first phase, we incorporate a contrastive learning framework to maximize the mutual information between the source and target domains, and we construct a generative model that learns only a unidirectional modality-conversion mapping, thereby capturing the associations between their visual contents. Source-domain images are converted to the style of the target domain, achieving image-level domain alignment; the generated images are then combined with the source-domain images to form an enhanced domain for cross-modal training. The enhanced-domain data are fed into the teacher network to initialize its weights and produce pseudolabels. Next, to address the detection of small remote sensing targets, we construct a multidimensional progressive feature fusion detection framework that first fuses two adjacent low-level feature maps and then progressively incorporates high-level features, improving the quality of fusion across nonadjacent layers. In addition, a spatial-dimension convolution is integrated into the backbone network, embedded after each standard convolution, to mitigate the loss of detailed features. Finally, a distillation training strategy uses the pseudodetection labels to extract target information: channel activations are transformed into probability distributions, and knowledge distillation is achieved by minimizing the Kullback–Leibler divergence between the probability maps of the teacher and student networks. The training weights are transferred from the teacher network to the student network to maximize detection accuracy. Extensive experiments are conducted on three optical-to-infrared datasets, and the results show that TPDTNet achieves state-of-the-art performance relative to the baseline model.
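The channel-wise distillation step described in the abstract (channel activations turned into probability distributions, then matched by minimizing KL divergence between teacher and student) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the feature-map shape, the temperature `tau`, and the per-channel softmax over spatial locations are common choices for this family of losses and are assumed here for illustration.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def channel_kl_distillation(teacher_feat, student_feat, tau=1.0):
    """Channel-wise KL distillation loss (illustrative sketch).

    Each channel's spatial activations are flattened and converted to a
    probability distribution via a temperature-scaled softmax; the loss is
    KL(teacher || student) averaged over channels. Inputs have shape (C, H, W).
    """
    C = teacher_feat.shape[0]
    loss = 0.0
    for c in range(C):
        p = softmax(teacher_feat[c].ravel() / tau)  # teacher distribution
        q = softmax(student_feat[c].ravel() / tau)  # student distribution
        loss += np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)))
    return (tau ** 2) * loss / C

# Toy example: random teacher/student feature maps with 8 channels.
rng = np.random.default_rng(0)
t = rng.normal(size=(8, 4, 4))
s = rng.normal(size=(8, 4, 4))
print(channel_kl_distillation(t, s))  # positive when student differs from teacher
print(channel_kl_distillation(t, t))  # near zero when student matches teacher
```

Minimizing this loss drives each student channel's spatial activation distribution toward the corresponding teacher channel's distribution, which is the knowledge-transfer mechanism the abstract refers to.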

Keywords