ETRI Journal (Apr 2022)

Video augmentation technique for human action recognition using genetic algorithm

  • Nudrat Nida,
  • Sang-Min Lee,
  • Young-Chul Kim

DOI
https://doi.org/10.4218/etrij.2019-0510
Journal volume & issue
Vol. 44, no. 2
pp. 327–338

Abstract

Classification models for human action recognition require robust features and large training sets for good generalization. Data augmentation methods are therefore employed on imbalanced training sets to achieve higher accuracy. However, samples generated by conventional data augmentation only reflect existing samples within the training set; their feature representations are less diverse and hence contribute to less precise classification. This paper presents new data augmentation and action-representation approaches to grow training sets. The proposed approach is based on two fundamental concepts: virtual video generation for augmentation and representation of action videos through robust features. Virtual videos are generated from the motion history templates of action videos, which are convolved using a convolutional neural network to produce deep features. Furthermore, guided by an objective function of the genetic algorithm, the spatiotemporal features of different samples are combined to generate representations of the virtual videos, which are then classified through an extreme learning machine classifier on the MuHAVi-Uncut, IXMAS, and IAVID-1 datasets.
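
To illustrate the two core steps described in the abstract, the sketch below (not the authors' code) shows how a genetic algorithm can evolve mixing weights over existing deep feature vectors of one class to synthesize a virtual-sample representation, and how a single-hidden-layer extreme learning machine can classify such features with closed-form output weights. The fitness function (distance to the class centroid), feature dimensionality, and GA hyperparameters are assumptions; the paper's actual objective function and network features may differ.

```python
# Minimal sketch, assuming: centroid-proximity fitness, arithmetic crossover,
# Gaussian mutation, and random 128-D "deep features" standing in for CNN outputs.
import numpy as np

rng = np.random.default_rng(0)

def ga_virtual_features(class_feats, pop_size=20, generations=50, mut_rate=0.1):
    """Evolve mixing weights over real samples of one class to create a
    virtual feature vector close to the class centroid (assumed fitness)."""
    n, d = class_feats.shape
    centroid = class_feats.mean(axis=0)
    pop = rng.random((pop_size, n))                   # mixing weights per individual
    pop /= pop.sum(axis=1, keepdims=True)
    for _ in range(generations):
        virtual = pop @ class_feats                   # candidate virtual features
        fitness = -np.linalg.norm(virtual - centroid, axis=1)
        parents = pop[np.argsort(fitness)[::-1][: pop_size // 2]]   # selection
        children = (parents + parents[::-1]) / 2                    # crossover
        children += mut_rate * rng.standard_normal(children.shape)  # mutation
        children = np.clip(children, 0, None)
        children /= children.sum(axis=1, keepdims=True) + 1e-12
        pop = np.vstack([parents, children])
    best = pop[np.argmin(np.linalg.norm(pop @ class_feats - centroid, axis=1))]
    return best @ class_feats                         # virtual-sample representation

def train_elm(X, y, n_hidden=256, reg=1e-3):
    """ELM: random hidden-layer weights, ridge-regression output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                            # random hidden-layer features
    Y = np.eye(y.max() + 1)[y]                        # one-hot targets
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy usage: augment each of 3 classes with one virtual sample, then classify.
X = rng.standard_normal((60, 128))
y = np.repeat(np.arange(3), 20)
virtual_X = np.vstack([ga_virtual_features(X[y == c]) for c in range(3)])
virtual_y = np.arange(3)
W, b, beta = train_elm(np.vstack([X, virtual_X]), np.concatenate([y, virtual_y]))
print(elm_predict(X[:5], W, b, beta))
```

In practice the GA would operate on CNN features extracted from motion history templates rather than random vectors, and several virtual samples per class would be generated before training the ELM.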

Keywords