Complex & Intelligent Systems (Aug 2023)

Parameter sharing and multi-granularity feature learning for cross-modality person re-identification

  • Sixian Chan,
  • Feng Du,
  • Tinglong Tang,
  • Guodao Zhang,
  • Xiaoliang Jiang,
  • Qiu Guan

DOI
https://doi.org/10.1007/s40747-023-01189-y
Journal volume & issue
Vol. 10, no. 1
pp. 949–962

Abstract

Visible-infrared person re-identification aims to match pedestrian images across visible and infrared modalities; its two main challenges are intra-modality differences and cross-modality differences between visible and infrared images. To address these issues, many advanced methods design new network structures to extract modality-shared features and mitigate modality differences, or learn part-level features to overcome background interference. However, they overlook sharing the parameters of the convolutional layers to obtain more modality-shared features, and relying only on part-level features lacks discriminative pedestrian representations such as body structure and contours. To handle these problems, a parameter sharing and feature learning network is proposed in this paper to mitigate modality differences and further enhance feature discrimination. First, a new two-stream parameter sharing network is proposed, which shares convolutional-layer parameters to obtain more modality-shared features. Second, a multi-granularity feature learning module is designed to reduce modality differences at both coarse- and fine-grained levels while further enhancing feature discriminability. In addition, a center alignment loss is proposed to learn relationships between identities and to reduce modality differences by clustering features toward their centers. For part-level feature learning, the hetero-center triplet loss is adopted to relax the strict constraints of the triplet loss. Finally, extensive experiments validate that our method outperforms state-of-the-art methods on two challenging datasets. On the SYSU-MM01 dataset, Rank-1 and mAP reach 74.0% and 70.51% in the all-search mode, an improvement of 3.4% and 3.61% over the baseline, respectively.
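
As a rough illustration of the hetero-center triplet loss mentioned in the abstract, the PyTorch-style sketch below computes per-identity feature centers in each modality and applies a triplet constraint between centers rather than between individual samples. The function name, batch layout, and margin value are assumptions made for illustration; this is not the authors' released code.

    # Hypothetical sketch of a hetero-center triplet loss (assumed names/layout).
    # Features from the visible and infrared streams are averaged into
    # per-identity, per-modality centers; the triplet constraint is then applied
    # between centers, relaxing the strict sample-level triplet constraint.
    import torch
    import torch.nn.functional as F

    def hetero_center_triplet_loss(feat_v, feat_r, labels, margin=0.3):
        """feat_v, feat_r: (N, D) visible / infrared features sharing the same N labels."""
        centers_v, centers_r = [], []
        for pid in torch.unique(labels):
            mask = labels == pid
            centers_v.append(feat_v[mask].mean(dim=0))
            centers_r.append(feat_r[mask].mean(dim=0))
        cv = torch.stack(centers_v)          # (P, D) visible identity centers
        cr = torch.stack(centers_r)          # (P, D) infrared identity centers
        dist = torch.cdist(cv, cr)           # (P, P) cross-modality center distances
        pos = dist.diag()                    # same-identity, cross-modality distance
        # hardest negative: closest center of a different identity (mask the diagonal)
        masked = dist + torch.eye(len(cv), device=dist.device) * 1e6
        hardest_neg = torch.minimum(masked.min(dim=1).values, masked.min(dim=0).values)
        return F.relu(pos - hardest_neg + margin).mean()

In practice such a loss would be combined with an identity classification loss and, per the abstract, a center alignment term that pulls features toward their identity centers across modalities.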

Keywords