IEEE Access (Jan 2018)

Multi-View Gait Recognition Based on a Spatial-Temporal Deep Neural Network

  • Suibing Tong
  • Yuzhuo Fu
  • Xinwei Yue
  • Hefei Ling

DOI
https://doi.org/10.1109/ACCESS.2018.2874073
Journal volume & issue
Vol. 6
pp. 57583 – 57596

Abstract

This paper proposes a novel spatial-temporal deep neural network (STDNN) for multi-view gait recognition. The STDNN comprises a temporal feature network (TFN) and a spatial feature network (SFN). In the TFN, a feature sub-network extracts the low-level edge features of gait silhouettes. These features are fed into the spatial-temporal gradient (STG) network, which combines an STG unit and a long short-term memory (LSTM) unit to extract STG features. In the SFN, the spatial features of gait sequences are extracted by multilayer convolutional neural networks from a gait energy image. The SFN is optimized jointly by a classification loss and a verification loss, which makes inter-class variations larger than intra-class variations. After training, the TFN and the SFN are employed to extract temporal and spatial features, respectively, which are applied to multi-view gait recognition. Finally, the combined predicted probability is used to identify individuals by the differences in their gaits. To evaluate the performance of the STDNN, extensive evaluations are carried out on the CASIA-B, OU-ISIR, and CMU MoBo data sets. The best recognition scores achieved by the STDNN are 95.67% under an identical view, 93.64% under a cross-view setting, and 92.54% under a multi-view setting. State-of-the-art approaches are compared with the STDNN in various situations. The results show that the STDNN outperforms the other methods and demonstrates great potential for practical applications.
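
For orientation, the following is a minimal PyTorch-style sketch of the two-stream structure the abstract outlines: a TFN over silhouette sequences, an SFN over the gait energy image, a joint classification-plus-verification loss, and identification by combined predicted probabilities. The layer sizes, the frame-difference approximation of the STG unit, the contrastive form of the verification loss, and the equal-weight fusion are illustrative assumptions, not the authors' published configuration.

    # Hypothetical sketch of the STDNN two-stream structure, assuming PyTorch.
    # All hyperparameters below are placeholders chosen for illustration.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class TemporalFeatureNetwork(nn.Module):
        """TFN: edge-feature sub-network followed by an STG/LSTM stage."""

        def __init__(self, hidden_dim=256, num_classes=124):
            super().__init__()
            # Low-level edge-feature sub-network applied to each silhouette frame.
            self.edge_net = nn.Sequential(
                nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d((8, 8)),
            )
            # The STG is approximated here as a frame-to-frame feature
            # difference (an assumption), then modeled over time by an LSTM.
            self.lstm = nn.LSTM(32 * 8 * 8, hidden_dim, batch_first=True)
            self.cls = nn.Linear(hidden_dim, num_classes)

        def forward(self, silhouettes):          # (B, T, 1, H, W)
            b, t = silhouettes.shape[:2]
            frames = silhouettes.flatten(0, 1)   # (B*T, 1, H, W)
            feats = self.edge_net(frames).flatten(1).view(b, t, -1)
            stg = feats[:, 1:] - feats[:, :-1]   # temporal gradient of edge features
            out, _ = self.lstm(stg)
            return self.cls(out[:, -1])          # class logits from last time step


    class SpatialFeatureNetwork(nn.Module):
        """SFN: multilayer CNN on the gait energy image (GEI)."""

        def __init__(self, embed_dim=256, num_classes=124):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 16, 7, stride=2, padding=3), nn.ReLU(),
                nn.Conv2d(16, 64, 5, stride=2, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)),
            )
            self.embed = nn.Linear(64 * 4 * 4, embed_dim)
            self.cls = nn.Linear(embed_dim, num_classes)

        def forward(self, gei):                  # (B, 1, H, W)
            emb = self.embed(self.cnn(gei).flatten(1))
            return emb, self.cls(emb)


    def sfn_joint_loss(logits, emb_a, emb_b, labels, same_id, margin=1.0):
        """Classification loss plus a contrastive-style verification loss,
        pushing inter-class distances beyond intra-class ones."""
        cls_loss = F.cross_entropy(logits, labels)
        dist = F.pairwise_distance(emb_a, emb_b)
        ver_loss = torch.where(same_id, dist.pow(2),
                               F.relu(margin - dist).pow(2)).mean()
        return cls_loss + ver_loss


    def fused_prediction(tfn_logits, sfn_logits, alpha=0.5):
        """Combine the two streams' predicted probabilities for identification."""
        p = alpha * F.softmax(tfn_logits, dim=1) + (1 - alpha) * F.softmax(sfn_logits, dim=1)
        return p.argmax(dim=1)

In this sketch, identification simply averages the two streams' softmax outputs; the paper's actual STG computation and fusion rule may differ.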

Keywords