Entropy (Aug 2024)

Cross-Modality Person Re-Identification Method with Joint-Modality Generation and Feature Enhancement

  • Yihan Bi,
  • Rong Wang,
  • Qianli Zhou,
  • Zhaolong Zeng,
  • Ronghui Lin,
  • Mingjie Wang

DOI: https://doi.org/10.3390/e26080681
Journal volume & issue: Vol. 26, no. 8, p. 681

Abstract


To minimize the disparity between the visible and infrared modalities and to enhance pedestrian feature representation, a cross-modality person re-identification method is proposed that integrates modality generation with feature enhancement. Specifically, a lightweight network performs dimension reduction and augmentation on visible images, generating intermediate modalities that bridge the gap between visible and infrared images. The Convolutional Block Attention Module is embedded into the ResNet50 backbone network to selectively emphasize key features, sequentially along the channel and spatial dimensions. Additionally, the Gradient Centralization algorithm is introduced into the Stochastic Gradient Descent optimizer to accelerate convergence and improve the generalization capability of the network. Experimental results on the SYSU-MM01 and RegDB datasets demonstrate that our improved network model achieves significant performance gains, with Rank-1 accuracy increasing by 7.12% and 6.34% and mAP improving by 4.00% and 6.05%, respectively.
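The two plug-in components named in the abstract, CBAM attention inside a ResNet50 backbone and Gradient Centralization inside SGD, are standard techniques that can be sketched in PyTorch. The sketch below is illustrative only, not the authors' implementation: the placement of CBAM after each residual stage, the reduction ratio, and the class names are assumptions.

```python
# Minimal PyTorch sketch (assumptions noted above): CBAM blocks appended to
# ResNet50 stages, and an SGD variant that applies Gradient Centralization.
import torch
import torch.nn as nn
from torchvision.models import resnet50


class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * w


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)    # channel-wise average map
        mx = x.amax(dim=1, keepdim=True)     # channel-wise max map
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w


class CBAM(nn.Module):
    """Channel attention followed by spatial attention, applied sequentially."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))


def build_backbone():
    """ResNet50 with a CBAM block after each residual stage (placement assumed)."""
    net = resnet50(weights=None)
    net.layer1 = nn.Sequential(net.layer1, CBAM(256))
    net.layer2 = nn.Sequential(net.layer2, CBAM(512))
    net.layer3 = nn.Sequential(net.layer3, CBAM(1024))
    net.layer4 = nn.Sequential(net.layer4, CBAM(2048))
    return net


class SGD_GC(torch.optim.SGD):
    """SGD with Gradient Centralization: subtract the per-filter mean from each
    weight gradient (tensors with more than one dimension) before the SGD step."""
    @torch.no_grad()
    def step(self, closure=None):
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is not None and p.grad.dim() > 1:
                    dims = tuple(range(1, p.grad.dim()))
                    p.grad.sub_(p.grad.mean(dim=dims, keepdim=True))
        return super().step(closure)


model = build_backbone()
optimizer = SGD_GC(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
```

Centralizing only gradients with more than one dimension restricts Gradient Centralization to convolutional and fully connected weights, leaving biases and batch-norm parameters untouched, which matches the technique's usual formulation.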

Keywords