Nihon Kikai Gakkai ronbunshu (Mar 2024)

Classification of young and elderly age group using gait feature extraction from foot videos

  • Junya KOBAYASHI,
  • Nobuaki NAKAZAWA

DOI
https://doi.org/10.1299/transjsme.23-00324
Journal volume & issue
Vol. 90, no. 932
pp. 23-00324 – 23-00324

Abstract

In recent years, Japan's declining birth rate and aging population have become serious problems. Under these circumstances, there is concern about a shortage of caregivers in elderly care facilities, particularly for watching over residents. The introduction of camera-based monitoring systems in such facilities could therefore help reduce the burden on caregivers. However, caregivers may feel averse to being captured by surveillance cameras, which raises privacy concerns. Our proposed solution to this problem is a camera-based monitoring system that focuses only on the feet of pedestrians. The purpose of this research is to extract gait features, by image processing, from videos in which only the pedestrian's feet are visible, and to classify young and elderly people using these features. First, each captured foot image was converted into a flat-foot image by Otsu's binarization, which extracts the foot region completely in contact with the ground. Next, the heel position was estimated from the flat-foot image and converted from image coordinates to world coordinates using a perspective projection model. Finally, using the calculated heel positions and the estimated heel-strike and toe-off frames, gait features such as step length, stride length, gait cycle, and gait velocity were obtained. T-tests revealed significant differences in step length and stride length between young and elderly people. In addition, the gait velocity of elderly people tended to be lower than that of young people. These results are consistent with the findings of previous studies. Furthermore, a support vector machine was used to classify young and elderly people and to evaluate the classification accuracy. According to the cross-validation results, the present system achieved an accuracy of 82%.
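
The sketch below illustrates, under stated assumptions, how the image-processing steps named in the abstract could be arranged with OpenCV and NumPy: Otsu's binarization yields a flat-foot mask, a heel pixel is estimated from that mask, and a floor-plane homography (standing in for the paper's perspective projection model and assumed to come from a one-time calibration) maps it to world coordinates before stride-based features are computed. All function names, the heel heuristic, and the homography H are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (assumed tooling: OpenCV / NumPy) of the processing steps
# named in the abstract; function names, the heel heuristic, and the homography
# H are assumptions, not the authors' code.
import cv2
import numpy as np

def flat_foot_mask(gray):
    """Binarize a grayscale foot image with Otsu's method; the foreground is
    taken to be the foot region that is completely in contact with the ground."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

def heel_pixel(mask):
    """Crude heel estimate: the rear-most (largest image-y) foreground pixel,
    assuming the subject walks away from the camera."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    i = int(np.argmax(ys))
    return np.array([xs[i], ys[i]], dtype=np.float32)

def image_to_world(pt_px, H):
    """Map an image point onto floor-plane world coordinates with a planar
    homography H (obtained from a one-time calibration of the camera against
    the floor), as a stand-in for the paper's perspective projection model."""
    return cv2.perspectiveTransform(pt_px.reshape(1, 1, 2), H).reshape(2)

def gait_features(heel_world, heel_strike_frames, fps):
    """Stride length, gait cycle, and gait velocity from the world-coordinate
    heel positions of one foot at its heel-strike frames.
    heel_world: dict {frame_index: np.array([x, y]) in metres}.
    Step length would be computed analogously from alternating feet."""
    strides, cycles = [], []
    for f0, f1 in zip(heel_strike_frames[:-1], heel_strike_frames[1:]):
        strides.append(float(np.linalg.norm(heel_world[f1] - heel_world[f0])))
        cycles.append((f1 - f0) / fps)
    stride, cycle = float(np.mean(strides)), float(np.mean(cycles))
    return {"stride_length_m": stride,
            "gait_cycle_s": cycle,
            "gait_velocity_m_s": stride / cycle}
```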
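
For the group comparison and the classifier, a minimal sketch with SciPy and scikit-learn follows; the paper does not specify the t-test variant, SVM kernel, feature scaling, or number of cross-validation folds, so those choices are assumptions here.

```python
# Minimal sketch (assumed tooling: SciPy / scikit-learn) of the t-test and the
# young-vs-elderly SVM classifier described in the abstract.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def compare_step_lengths(step_young, step_elderly):
    """Two-sample t-test on the step lengths of the two groups
    (Welch's variant chosen here; the paper does not state its variance assumption)."""
    return ttest_ind(step_young, step_elderly, equal_var=False)

def classify_young_vs_elderly(X, y, folds=5):
    """Cross-validated SVM accuracy.
    X: (n_subjects, n_features) matrix of gait features, e.g. columns
       [step_length, stride_length, gait_cycle, gait_velocity].
    y: labels, e.g. 0 = young, 1 = elderly.
    The RBF kernel, standardization, and 5 folds are assumptions; the paper
    reports about 82% cross-validated accuracy with its own settings."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, X, y, cv=folds).mean()
```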

Keywords