IEEE Access (Jan 2019)
Human Activity Recognition Based on Motion Sensor Using U-Net
Abstract
Traditional human activity recognition (HAR) based on motion sensors adopts sliding-window labeling and prediction. This approach suffers from the multi-class window problem, in which sampling points from different classes within one window are mistakenly labeled as a single class. In this paper, we propose a novel HAR method based on U-Net that overcomes the multi-class window problem by labeling and predicting the activity of each sampling point. The motion sensor data collected from wearable sensors are mapped into a single-pixel-column, multi-channel image, which is then fed into the U-Net network to perform pixel-level activity recognition. We design a complete U-Net-based HAR framework that realizes dense prediction of motion sensor data, comprising data preprocessing, dense prediction, and post-analysis. To further improve dense prediction performance, we propose a post-correction algorithm for the dense prediction results based on an analysis of activity misalignment. Extensive experimental results demonstrate that our U-Net method outperforms traditional machine learning and deep learning methods based on sliding-window prediction, and that it also outperforms the fully convolutional network (FCN), SegNet, and Mask R-CNN based on dense prediction, on four datasets. Moreover, it shows better robustness and excellent recognition performance on short-term activities and minority classes. We also release a new dataset named Sanitation, which contains seven types of daily work activities of sanitation workers, to evaluate the performance of HAR algorithms.
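To make the data mapping and per-point prediction concrete, the following is a minimal sketch, not the authors' released code: it assumes PyTorch, a hypothetical window of T sampling points with C sensor channels treated as a T x 1 image with C channels, and a toy U-Net-style network (TinyUNet1Col, a name introduced here for illustration) that outputs one class logit per sampling point.

```python
import torch
import torch.nn as nn

def window_to_image(window):
    """Map a (T, C) sensor window to a (C, T, 1) single-pixel-column image."""
    return torch.as_tensor(window, dtype=torch.float32).t().unsqueeze(-1)

class TinyUNet1Col(nn.Module):
    """Toy U-Net-style encoder-decoder for dense (per-point) prediction."""
    def __init__(self, in_ch, num_classes, base=16):
        super().__init__()
        self.enc1 = self._block(in_ch, base)
        self.pool = nn.MaxPool2d((2, 1))               # downsample along time only
        self.enc2 = self._block(base, base * 2)
        self.up = nn.ConvTranspose2d(base * 2, base, (2, 1), stride=(2, 1))
        self.dec1 = self._block(base * 2, base)        # after skip concatenation
        self.head = nn.Conv2d(base, num_classes, 1)    # per-point class logits

    @staticmethod
    def _block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, (3, 1), padding=(1, 0)), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, (3, 1), padding=(1, 0)), nn.ReLU(inplace=True),
        )

    def forward(self, x):                              # x: (N, C, T, 1)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)                           # (N, num_classes, T, 1)

# Usage with hypothetical sizes: a 3-axis accelerometer and a window of 64 points;
# seven classes mirrors the Sanitation dataset's seven activity types.
x = window_to_image(torch.randn(64, 3)).unsqueeze(0)   # (1, 3, 64, 1)
logits = TinyUNet1Col(in_ch=3, num_classes=7)(x)       # (1, 7, 64, 1)
pred_per_point = logits.argmax(dim=1).squeeze()        # one label per sampling point
```

The key design point this sketch illustrates is that, unlike sliding-window classifiers that emit one label per window, the single-pixel-column image formulation lets a segmentation-style network assign a label to every sampling point, which is what avoids the multi-class window problem.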
Keywords