IEEE Access (Jan 2019)

A Hybrid Deep Learning Model for Human Activity Recognition Using Multimodal Body Sensing Data

  • Abdu Gumaei,
  • Mohammad Mehedi Hassan,
  • Abdulhameed Alelaiwi,
  • Hussain Alsalman

DOI
https://doi.org/10.1109/ACCESS.2019.2927134
Journal volume & issue
Vol. 7
pp. 99152–99160

Abstract

Human activity recognition from multimodal body sensor data has proven to be an effective approach for the care of elderly or physically impaired people in a smart healthcare environment. However, traditional machine learning techniques mostly focus on a single sensing modality, which is impractical for robust healthcare applications. Researchers have therefore given increasing attention to developing robust machine learning techniques that can exploit multimodal body sensor data and support important decision making in smart healthcare. In this paper, we propose an effective multi-sensor-based framework for human activity recognition using a hybrid deep learning model that combines simple recurrent units (SRUs) with gated recurrent units (GRUs). We use deep SRUs to process the sequences of multimodal input data through their internal memory states, and deep GRUs to learn how much past information to pass on to the future state, mitigating fluctuations or instability in accuracy and the vanishing gradient problem. The system is compared against conventional approaches on a publicly available standard dataset, and the experimental results show that the proposed approach outperforms the available state-of-the-art methods.
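To make the two-stage architecture concrete, the following is a minimal NumPy sketch of the forward pass, not the authors' implementation: an SRU layer (whose matrix multiplies depend only on the input, with an element-wise recurrent cell state) feeds a GRU layer (whose gates control how much past state is carried forward), followed by a linear classifier over the last time step. All sizes, the single-layer depth, and the random untrained weights are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sru_layer(X, W):
    """Simple Recurrent Unit: the matrix multiplies use only the input X,
    while the recurrence is an element-wise running cell state c."""
    T, d = X.shape
    Wx, Wf, Wr = W                               # each (d, d)
    c = np.zeros(d)
    H = np.empty((T, d))
    for t in range(T):
        f = sigmoid(X[t] @ Wf)                   # forget gate
        r = sigmoid(X[t] @ Wr)                   # highway gate
        c = f * c + (1 - f) * (X[t] @ Wx)        # element-wise cell update
        H[t] = r * np.tanh(c) + (1 - r) * X[t]   # highway connection
    return H

def gru_layer(X, W):
    """Gated Recurrent Unit: update/reset gates decide how much of the
    past hidden state is passed on to the future state."""
    T, d = X.shape
    Wz, Uz, Wr, Ur, Wh, Uh = W                   # each (d, d)
    h = np.zeros(d)
    H = np.empty((T, d))
    for t in range(T):
        z = sigmoid(X[t] @ Wz + h @ Uz)          # update gate
        r = sigmoid(X[t] @ Wr + h @ Ur)          # reset gate
        h_cand = np.tanh(X[t] @ Wh + (r * h) @ Uh)
        h = (1 - z) * h + z * h_cand
        H[t] = h
    return H

# Illustrative sizes (assumptions): 23 sensor channels, 64 hidden units,
# 12 activity classes, 50-step windows; weights are random and untrained.
rng = np.random.default_rng(0)
d_in, d_h, n_cls, T = 23, 64, 12, 50
W_in  = rng.normal(scale=0.1, size=(d_in, d_h))
W_sru = [rng.normal(scale=0.1, size=(d_h, d_h)) for _ in range(3)]
W_gru = [rng.normal(scale=0.1, size=(d_h, d_h)) for _ in range(6)]
W_out = rng.normal(scale=0.1, size=(d_h, n_cls))

x = rng.normal(size=(T, d_in))      # one window of multimodal sensor data
h = sru_layer(x @ W_in, W_sru)      # SRU stage processes the sequence
h = gru_layer(h, W_gru)             # GRU stage gates past information
logits = h[-1] @ W_out              # classify from the final time step
print(logits.shape)                 # (12,)
```

In practice the two stages would be stacked deeper and trained end-to-end with cross-entropy loss; the sketch only shows how a sensor window flows through the SRU and GRU stages to produce per-class scores.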

Keywords