Alexandria Engineering Journal (Mar 2024)
A multi-channel hybrid deep learning framework for multi-sensor fusion enabled human activity recognition
Abstract
Smart and connected health (SCH) accelerates the development and integration of information science and engineering approaches to support the digital transformation of health and medicine in populated societies. Sensor-data-based human activity recognition (HAR), as an effective means of SCH, is promising for healthcare monitoring and ambient assisted living. This paper focuses on HAR enabled by multi-position sensor data fusion and proposes a multi-channel deep learning framework. The main contributions are as follows: (1) A multi-channel hybrid deep learning model (1DCNN-Att-BiLSTM) is proposed that combines a one-dimensional convolutional neural network, a bidirectional long short-term memory network, and an attention mechanism to significantly improve recognition performance by extracting and selecting local and global behavioral features in the spatial and temporal domains; (2) The publicly accessible datasets Shoaib AR, Shoaib SA, and HAPT are collected and processed to build a multi-position sensor data pool for model evaluation; (3) The classification performance of seven sensor data fusion patterns from different body positions is thoroughly evaluated within the parallel network structure of the multi-channel framework; (4) The recognition performance of traditional machine learning models, deep learning models, and hybrid models is compared with that of our proposed model. Extensive experiments demonstrate that our approach achieves competitive performance.
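The multi-channel 1DCNN-Att-BiLSTM architecture described above can be sketched in PyTorch. This is a hypothetical minimal implementation: the layer sizes, kernel width, additive attention form, number of body positions, and class count are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ChannelBranch(nn.Module):
    """One channel per sensor position: 1D CNN -> BiLSTM -> attention.
    All hyperparameters here are illustrative assumptions."""
    def __init__(self, in_ch=6, conv_ch=32, lstm_hidden=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_ch, conv_ch, kernel_size=5, padding=2),  # local spatial features
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.bilstm = nn.LSTM(conv_ch, lstm_hidden, batch_first=True,
                              bidirectional=True)  # global temporal features
        self.att = nn.Linear(2 * lstm_hidden, 1)   # additive attention scores

    def forward(self, x):                 # x: (batch, sensor_axes, time)
        h = self.conv(x)                  # (batch, conv_ch, time/2)
        h, _ = self.bilstm(h.transpose(1, 2))  # (batch, time/2, 2*hidden)
        w = torch.softmax(self.att(h), dim=1)  # weights over time steps
        return (w * h).sum(dim=1)         # attention-weighted context vector

class MultiChannelHAR(nn.Module):
    """Parallel branches, one per body position, fused before the classifier."""
    def __init__(self, n_positions=2, in_ch=6, n_classes=7):
        super().__init__()
        self.branches = nn.ModuleList(ChannelBranch(in_ch)
                                      for _ in range(n_positions))
        self.head = nn.Linear(n_positions * 64, n_classes)

    def forward(self, xs):                # xs: one tensor per body position
        feats = [b(x) for b, x in zip(self.branches, xs)]
        return self.head(torch.cat(feats, dim=1))

model = MultiChannelHAR(n_positions=2, in_ch=6, n_classes=7)
# Two body positions, each with a 6-axis IMU window of 128 samples
xs = [torch.randn(4, 6, 128) for _ in range(2)]
logits = model(xs)
print(logits.shape)  # torch.Size([4, 7])
```

Each branch processes one body position's sensor stream independently, so fusion patterns (e.g. wrist only, wrist + pocket) can be evaluated by varying which branches are instantiated and concatenated.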