IEEE Access (Jan 2024)

Visible-Infrared Cross-Modality Person Re-Identification via Adaptive Weighted Triplet Loss and Progressive Training

  • Ling Song,
  • Minggong Yu,
  • Delin Sun,
  • Xionghu Zhong

DOI
https://doi.org/10.1109/ACCESS.2024.3510425
Journal volume & issue
Vol. 12
pp. 181799–181807

Abstract

Visible-infrared cross-modality person re-identification (VI-ReID) aims to match images of the same person across multiple non-overlapping cameras of different modalities, and thus has broader application scenarios than the single-modality person re-identification task. The main difficulty of VI-ReID is the large visual discrepancy between the visible and infrared modalities. In this paper, an adaptive weighted triplet loss is proposed that adaptively adjusts the weights of triplet samples, reducing the impact of outlier samples and concentrating on the informative mid-hard samples. We also introduce a channel random shuffle data augmentation method that can be easily integrated into existing frameworks; it reduces the dependence on color information and improves robustness against color variations. A progressive training strategy is further employed to improve performance. Experiments show that the proposed methods achieve state-of-the-art results on the two public datasets SYSU-MM01 and RegDB without additional computation.
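The listing carries no code, but the two main components lend themselves to a short illustration. As a minimal sketch (the sampling scheme and probability below are assumptions, not taken from the paper), a channel random shuffle augmentation can be implemented by randomly permuting the RGB channels of each visible image:

```python
import torch

def channel_random_shuffle(batch: torch.Tensor, p: float = 0.5) -> torch.Tensor:
    """Randomly permute the RGB channels of a (N, 3, H, W) batch.

    Hypothetical sketch of the channel random shuffle augmentation:
    with probability `p`, an image's three colour channels are
    reordered, weakening the model's reliance on colour cues.
    """
    out = batch.clone()
    for i in range(batch.size(0)):
        if torch.rand(1).item() < p:
            out[i] = batch[i, torch.randperm(3)]
    return out
```

For the adaptive weighted triplet loss, one plausible reading (the paper's actual weighting scheme may differ) is a soft-mining variant: every positive and negative pair contributes with a softmax weight derived from its distance, so the loss is spread over many moderately hard pairs rather than being decided by the single hardest, possibly mislabeled, pair. The function `adaptive_weighted_triplet` below is such a sketch:

```python
import torch
import torch.nn.functional as F

def adaptive_weighted_triplet(features: torch.Tensor,
                              labels: torch.Tensor,
                              margin: float = 0.3) -> torch.Tensor:
    """Softmax-weighted triplet loss over all pairs in a batch.

    Assumes PK-style sampling, i.e. every identity appears at least
    twice in the batch; otherwise a row of the positive mask is empty
    and the softmax produces NaNs.
    """
    dist = torch.cdist(features, features)             # (N, N) pairwise L2 distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # True where labels match
    eye = torch.eye(len(labels), dtype=torch.bool, device=features.device)
    pos_mask = same & ~eye                             # positives, excluding self-pairs
    neg_mask = ~same                                   # negatives

    neg_inf = torch.full_like(dist, float("-inf"))
    # Harder positives (larger distance) and harder negatives (smaller
    # distance) receive larger weights, but every pair keeps a non-zero
    # share, which softens the influence of any single extreme pair
    # compared with hardest-pair mining.
    w_pos = torch.where(pos_mask, dist, neg_inf).softmax(dim=1)
    w_neg = torch.where(neg_mask, -dist, neg_inf).softmax(dim=1)

    d_pos = (w_pos * dist).sum(dim=1)  # weighted positive distance per anchor
    d_neg = (w_neg * dist).sum(dim=1)  # weighted negative distance per anchor
    return F.relu(d_pos - d_neg + margin).mean()

# Usage on a toy PK batch: 4 identities, 2 images each.
feats = F.normalize(torch.randn(8, 256), dim=1)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(adaptive_weighted_triplet(feats, labels))
```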

Keywords