IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2024)

A Third-Modality Collaborative Learning Approach for Visible-Infrared Vessel Reidentification

  • Qi Zhang,
  • Yiming Yan,
  • Long Gao,
  • Congan Xu,
  • Nan Su,
  • Shou Feng

DOI
https://doi.org/10.1109/JSTARS.2024.3479423
Journal volume & issue
Vol. 17
pp. 19035 – 19047

Abstract


Visible-Infrared Re-identification (VI-ReID) of vessels is an important component task in the application of UAV remote sensing data; it aims to retrieve, from image libraries containing vessels of different modalities, images with the same identity as a given vessel. One of its main challenges is the large modality gap between visible (VIS) and infrared (IR) images. Some state-of-the-art methods design complex networks or generative methods to mitigate the modality difference, but they ignore the highly nonlinear relationship between the two modalities. To address this problem, we propose a nonlinear Third-Modality Generator (TMG) that generates third-modality images to learn collaboratively with the original two modalities. In addition, to make the network focus on salient image regions and obtain rich local information, a Multidimensional Attention Guidance (MAG) module is proposed to guide attention in both the channel and spatial dimensions. By integrating TMG, MAG, and three designed losses (Generative Consistency Loss, Cross-Modality Loss, and Modality Internal Loss) into an end-to-end learning framework, we propose a network that exploits the third modality for collaborative learning, called the Third-Modality Collaborative Network (TMCN), which has strong discriminative ability and significantly reduces the modality difference between VIS and IR. Furthermore, owing to the lack of vessel data for the VI-ReID task, we have collected an airborne vessel cross-modality re-identification dataset (AVC-ReID) to promote the practical application of VI-ReID. Extensive experiments on the AVC-ReID dataset show that the proposed TMCN outperforms several other state-of-the-art methods.
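The abstract describes attention guidance applied in both the channel and spatial dimensions. The paper's actual MAG module is not specified here; the following is only a minimal, generic sketch of what channel-then-spatial attention over a feature map can look like (all function names and the pooling/sigmoid choices are illustrative assumptions, not the authors' design):

```python
import numpy as np

def sigmoid(x):
    # numerically plain logistic function used to squash attention weights
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W); global average pooling over the spatial dims
    # yields one weight per channel (illustrative, not the paper's MAG)
    w = sigmoid(feat.mean(axis=(1, 2)))          # shape (C,)
    return feat * w[:, None, None]

def spatial_attention(feat):
    # average across channels to a (H, W) saliency map, then reweight
    m = sigmoid(feat.mean(axis=0))               # shape (H, W)
    return feat * m[None, :, :]

def multidimensional_attention(feat):
    # apply channel attention first, then spatial attention
    return spatial_attention(channel_attention(feat))

# tiny demo: output keeps the input feature-map shape
feat = np.random.randn(8, 4, 4)
out = multidimensional_attention(feat)
assert out.shape == feat.shape
```

In real re-identification backbones such a block is typically implemented with learnable convolutions (e.g. CBAM-style modules) rather than the fixed pooling above; the sketch only shows the two reweighting dimensions the abstract refers to.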

Keywords