IEEE Access (Jan 2019)

Omnidirectional Feature Learning for Person Re-Identification

  • Di Wu,
  • Hong-Wei Yang,
  • De-Shuang Huang,
  • Chang-An Yuan,
  • Xiao Qin,
  • Yang Zhao,
  • Xin-Yong Zhao,
  • Jian-Hong Sun

DOI
https://doi.org/10.1109/ACCESS.2019.2901764
Journal volume & issue
Vol. 7
pp. 28402–28411

Abstract


Person re-identification (PReID) has received increasing attention due to its important role in intelligent surveillance. Many state-of-the-art PReID methods are part-based deep models. Most of these models learn part feature representations of a person's body along the horizontal direction, while the feature representation of the body along the vertical direction is usually ignored. In addition, the relationships between these part features and between different feature channels are not considered. In this paper, we introduce a multi-branch deep model for PReID. Specifically, the model consists of five branches: two branches learn part features with spatial information from horizontal and vertical orientations; one branch learns the interdependencies between the feature channels generated by the last convolution layer of the backbone network; and the remaining two branches are identification and triplet sub-networks, in which a discriminative global feature and a corresponding distance metric are learned simultaneously. All five branches improve the quality of representation learning. We conduct extensive comparison experiments on three benchmarks: Market-1501, CUHK03, and DukeMTMC-reID. The proposed deep framework outperforms other competitive state-of-the-art methods. The code is available at https://github.com/caojunying/person-reidentification.
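To make the five-branch layout concrete, below is a minimal PyTorch sketch of an architecture matching the abstract's description. It is an illustration only, not the authors' released code (see the linked repository for that): the choice of ResNet-50 as backbone, the SE-style attention for the channel branch, the part count num_parts=6, and the embedding size feat_dim=256 are all assumptions made here for the example.

```python
import torch.nn as nn
import torchvision.models as models

class MultiBranchReID(nn.Module):
    """Hypothetical five-branch re-ID model sketched from the abstract:
    horizontal parts, vertical parts, channel interdependencies, plus
    global identification (softmax) and triplet branches."""

    def __init__(self, num_classes, num_parts=6, feat_dim=256):
        super().__init__()
        resnet = models.resnet50(weights=None)
        # Shared backbone: everything up to the last conv block (2048 channels).
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])

        # Branch 1: horizontal stripes (one pooled feature per row band).
        self.h_pool = nn.AdaptiveAvgPool2d((num_parts, 1))
        self.h_embed = nn.ModuleList(
            [nn.Linear(2048, feat_dim) for _ in range(num_parts)])

        # Branch 2: vertical stripes (one pooled feature per column band).
        self.v_pool = nn.AdaptiveAvgPool2d((1, num_parts))
        self.v_embed = nn.ModuleList(
            [nn.Linear(2048, feat_dim) for _ in range(num_parts)])

        # Branch 3: squeeze-and-excitation-style attention over the final
        # feature map, modeling interdependencies between channels.
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(2048, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 2048), nn.Sigmoid())

        # Branches 4 and 5: global feature for identification (softmax)
        # and for the triplet metric, respectively.
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.id_head = nn.Linear(2048, num_classes)
        self.triplet_embed = nn.Linear(2048, feat_dim)

    def forward(self, x):
        f = self.backbone(x)                        # (B, 2048, H, W)
        B = f.size(0)

        h_parts = self.h_pool(f).view(B, 2048, -1)  # (B, 2048, num_parts)
        h_feats = [emb(h_parts[:, :, i]) for i, emb in enumerate(self.h_embed)]

        v_parts = self.v_pool(f).view(B, 2048, -1)  # (B, 2048, num_parts)
        v_feats = [emb(v_parts[:, :, i]) for i, emb in enumerate(self.v_embed)]

        attn = self.se(f).view(B, 2048, 1, 1)       # channel weights in (0, 1)
        g = self.gap(f * attn).flatten(1)           # channel-attended global feature

        logits = self.id_head(g)                    # for a cross-entropy (ID) loss
        trip = self.triplet_embed(g)                # for a triplet loss
        return logits, trip, h_feats, v_feats
```

In a training setup of this kind, the identification logits would typically be supervised with cross-entropy and the triplet embedding with a triplet loss, with per-part classifiers often attached to the horizontal and vertical features as well; the abstract's point is that all five branches are optimized jointly, so each improves the shared representation.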

Keywords