IEEE Access (Jan 2019)

Visible Infrared Cross-Modality Person Re-Identification Network Based on Adaptive Pedestrian Alignment

  • Bo Li
  • Xiaohong Wu
  • Qiang Liu
  • Xiaohai He
  • Fei Yang

DOI
https://doi.org/10.1109/ACCESS.2019.2955930
Journal volume & issue
Vol. 7
pp. 171485 – 171494

Abstract

Cross-modality person re-identification between the visible and infrared domains is important but extremely challenging for night-time surveillance. Besides the cross-modality discrepancies caused by different camera spectra, visible-infrared person re-identification (VI-REID) also suffers from substantial pedestrian misalignment, as well as the variations caused by different camera viewpoints and pedestrian pose deformations that affect traditional person re-identification. In this paper, we propose a multi-path adaptive pedestrian alignment network (MAPAN) to learn discriminative feature representations. The multi-path network learns features directly from the data in an end-to-end manner and aligns pedestrians adaptively without any additional manual annotations. To alleviate the intra-modality discrepancies caused by image misalignment, we combine the aligned visible-image features with the original visible-image features, strengthening the network's attention on pedestrians and significantly improving the discriminability of the learned features. To mitigate the cross-modality discrepancies between the visible and infrared domains, the discriminative features of the two modalities are mapped into the same feature embedding space, and the identity loss together with the triplet loss is used as the overall loss. Extensive experiments demonstrate the superior performance of the proposed method compared with state-of-the-art methods.
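
As a rough illustration of the overall loss described in the abstract, the PyTorch-style sketch below combines an identity (cross-entropy) loss with a triplet loss over visible and infrared features mapped into a shared embedding space. This is not the authors' released implementation: the module name SharedEmbeddingHead, the feature and embedding dimensions, the triplet margin, the number of identities, and the simple in-batch negative selection are all illustrative assumptions.

    import torch
    import torch.nn as nn

    class SharedEmbeddingHead(nn.Module):
        """Maps backbone features of either modality into a shared embedding space
        and predicts identities from that space (illustrative sketch)."""
        def __init__(self, in_dim=2048, embed_dim=512, num_ids=395):
            super().__init__()
            self.embed = nn.Sequential(nn.Linear(in_dim, embed_dim), nn.BatchNorm1d(embed_dim))
            self.classifier = nn.Linear(embed_dim, num_ids)

        def forward(self, feats):
            emb = self.embed(feats)         # shared embedding for visible and infrared features
            logits = self.classifier(emb)   # identity predictions for the identity loss
            return emb, logits

    id_loss = nn.CrossEntropyLoss()
    triplet_loss = nn.TripletMarginLoss(margin=0.3)

    def overall_loss(head, vis_feats, ir_feats, labels):
        """Identity loss on both modalities plus a cross-modality triplet loss."""
        emb_v, logits_v = head(vis_feats)
        emb_i, logits_i = head(ir_feats)
        l_id = id_loss(logits_v, labels) + id_loss(logits_i, labels)
        # Illustrative negative selection: shift the infrared embeddings by one position,
        # assuming neighbouring batch positions hold different identities.
        neg = emb_i.roll(shifts=1, dims=0)
        l_tri = triplet_loss(emb_v, emb_i, neg)
        return l_id + l_tri

    # Example usage with random tensors standing in for backbone features.
    head = SharedEmbeddingHead()
    vis = torch.randn(8, 2048)
    ir = torch.randn(8, 2048)
    labels = torch.arange(8)
    print(overall_loss(head, vis, ir, labels))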

Keywords