IEEE Access (Jan 2019)

Deep Fusion Feature Presentations for Nonaligned Person Re-Identification

  • Meixia Fu,
  • Songlin Sun,
  • Na Chen,
  • Danshi Wang,
  • Xiaoyun Tong

DOI
https://doi.org/10.1109/ACCESS.2019.2920426
Journal volume & issue
Vol. 7
pp. 73253 – 73261

Abstract

Person re-identification aims to retrieve a pedestrian across different cameras. It remains a challenging task for intelligent visual surveillance systems because of similar appearances, varying camera shooting angles, scene illumination, and pedestrian pose. In this paper, we propose a novel two-stream network, named the spatial segmentation network, that learns both global and local features in a unified framework for nonaligned person re-identification. One stream focuses on spatial feature learning using global adaptive average pooling in deep convolutional neural networks. The other stream learns fine local features by adopting horizontal average pooling, without any part division that depends on a pose predictor. To assess the relative importance of all features, we also report the performance of each part feature and of the global features. On Market-1501, the proposed method achieves 94.51% Rank-1 accuracy and 90.78% mAP; on DukeMTMC-reID, 87.52% Rank-1 and 84.82% mAP; and on CUHK03-detected, 69.71% Rank-1 and 71.67% mAP. These findings verify the state-of-the-art performance of the proposed method.
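The two pooling operations named in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the stripe count (`parts=6`, a common choice in part-based re-ID models) and the function names are assumptions, and a real network would apply these poolings to the output of a convolutional backbone rather than to a raw array.

```python
import numpy as np

def global_avg_pool(fmap):
    """Global average pooling: (C, H, W) feature map -> (C,) descriptor.

    Averages over all spatial positions, giving one value per channel.
    """
    return fmap.mean(axis=(1, 2))

def horizontal_avg_pool(fmap, parts=6):
    """Horizontal average pooling: (C, H, W) -> (parts, C) part descriptors.

    Splits the height axis into `parts` horizontal stripes and averages
    each stripe, with no pose-dependent part division.
    """
    stripes = np.array_split(fmap, parts, axis=1)
    return np.stack([s.mean(axis=(1, 2)) for s in stripes])

# Toy feature map with 2 channels, height 4, width 3.
fmap = np.arange(24, dtype=float).reshape(2, 4, 3)
print(global_avg_pool(fmap))            # -> [ 5.5 17.5]
print(horizontal_avg_pool(fmap, parts=2))
# -> [[ 2.5 14.5]
#     [ 8.5 20.5]]
```

In the paper's two-stream design, the global descriptor and the per-stripe descriptors would be learned jointly and concatenated (or compared separately) at retrieval time.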

Keywords