IEEE Access (Jan 2019)

An End-to-End Multi-Task and Fusion CNN for Inertial-Based Gait Recognition

  • Rubén Delgado-Escaño,
  • Francisco M. Castro,
  • Julián Ramos Cózar,
  • Manuel J. Marín-Jiménez,
  • Nicolás Guil

DOI
https://doi.org/10.1109/ACCESS.2018.2886899
Journal volume & issue
Vol. 7
pp. 1897–1908

Abstract


People identification using gait information (i.e., the way a person walks) obtained from inertial sensors is a robust approach that can be used in many situations where vision-based systems are not applicable. Typically, previous methods use hand-crafted features or deep learning approaches with pre-processed features as input. In contrast, we present a new deep learning-based end-to-end approach that takes raw inertial data as input. In this way, our approach can automatically learn the best representations without any constraint introduced by pre-processed features. Moreover, we study the fusion of information from multiple inertial sensors and multi-task learning from multiple labels per sample. Our proposal is experimentally validated on the challenging OU-ISIR dataset, the largest available dataset for gait recognition using inertial information. After conducting an extensive set of experiments to obtain the best hyper-parameters, our approach achieves state-of-the-art results. Specifically, we improve both the identification accuracy (from 83.8% to 94.8%) and the authentication equal error rate (from 5.6 to 1.1). Our experimental results suggest that: 1) hand-crafted features are not necessary for this task, as deep learning approaches using raw data achieve better results; 2) fusing information from multiple sensors improves the results; and 3) multi-task learning produces a single model that matches or outperforms the corresponding single-task models across multiple tasks.
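
The abstract does not give architectural details, so the following is only a minimal PyTorch sketch of the three ideas it mentions: a 1-D CNN over raw inertial sequences, feature-level fusion of multiple sensors, and multi-task output heads trained with a joint loss. All layer sizes, the number of sensors, the auxiliary label, and the loss weight are assumptions for illustration, not the paper's actual configuration.

# Hypothetical sketch (not the paper's exact architecture): a small 1-D CNN that
# consumes raw inertial sequences from several sensors, fuses their features,
# and predicts two labels (subject identity plus an assumed auxiliary task).
import torch
import torch.nn as nn


class SensorBranch(nn.Module):
    """1-D CNN over raw inertial data of shape (batch, channels, time)."""

    def __init__(self, in_channels: int = 6):  # e.g. 3-axis accelerometer + 3-axis gyroscope
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the temporal dimension
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x).squeeze(-1)  # (batch, 64)


class FusionMultiTaskNet(nn.Module):
    """Fuses per-sensor features and attaches one output head per task."""

    def __init__(self, num_subjects: int, num_sensors: int = 2):
        super().__init__()
        self.branches = nn.ModuleList([SensorBranch() for _ in range(num_sensors)])
        fused_dim = 64 * num_sensors
        self.id_head = nn.Linear(fused_dim, num_subjects)  # identification task
        self.aux_head = nn.Linear(fused_dim, 2)             # assumed auxiliary binary task

    def forward(self, inputs):
        # inputs: list of (batch, 6, time) tensors, one per sensor
        fused = torch.cat([branch(x) for branch, x in zip(self.branches, inputs)], dim=1)
        return self.id_head(fused), self.aux_head(fused)


def multitask_loss(id_logits, aux_logits, id_labels, aux_labels, w_aux: float = 0.5):
    """Joint multi-task loss: weighted sum of per-task cross-entropies (weight assumed)."""
    ce = nn.functional.cross_entropy
    return ce(id_logits, id_labels) + w_aux * ce(aux_logits, aux_labels)

In this sketch, fusion happens by concatenating the per-sensor feature vectors before the task heads; other fusion points (e.g., at the raw-signal or decision level) are equally possible and the paper compares such choices experimentally.
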

Keywords