PLoS ONE (Jan 2019)

Hierarchical multi-view aggregation network for sensor-based human activity recognition.

  • Xiheng Zhang,
  • Yongkang Wong,
  • Mohan S Kankanhalli,
  • Weidong Geng

DOI: https://doi.org/10.1371/journal.pone.0221390
Journal volume & issue: Vol. 14, no. 9, p. e0221390

Abstract

Sensor-based human activity recognition aims at detecting various physical activities performed by people with ubiquitous sensors. Unlike existing deep learning-based methods, which mainly extract black-box features from the raw sensor data, we propose a hierarchical multi-view aggregation network built on multi-view feature spaces. Specifically, we first construct various views of feature spaces for each individual sensor in terms of white-box features and black-box features. Our model then learns a unified representation of the multi-view features by aggregating the views in a hierarchical context at the feature level, the position level, and the modality level, with one aggregation module designed for each level. Based on the ideas of non-local operation and attention, our fusion method captures the correlation between features and leverages the relationships across different sensor positions and modalities. We comprehensively evaluate our method on 12 human activity benchmark datasets, where it outperforms state-of-the-art approaches in accuracy.
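
The abstract describes a three-stage, attention-based aggregation hierarchy (feature level, then position level, then modality level). Below is a minimal PyTorch sketch of that idea; the module names, tensor layout, embedding dimension, and the simple soft-attention pooling used here are illustrative assumptions for exposition, not the authors' released implementation or their exact non-local fusion operation.

    # Minimal sketch: hierarchical attention-based aggregation (assumed design).
    import torch
    import torch.nn as nn


    class AttentionAggregator(nn.Module):
        """Fuse a set of view embeddings into one vector with soft attention."""

        def __init__(self, dim: int):
            super().__init__()
            self.score = nn.Linear(dim, 1)  # scalar relevance score per view

        def forward(self, views: torch.Tensor) -> torch.Tensor:
            # views: (batch, num_views, dim)
            weights = torch.softmax(self.score(views), dim=1)  # (batch, num_views, 1)
            return (weights * views).sum(dim=1)                # (batch, dim)


    class HierarchicalAggregationNet(nn.Module):
        """Aggregate features -> positions -> modalities, then classify activities."""

        def __init__(self, dim: int, num_classes: int):
            super().__init__()
            self.feature_agg = AttentionAggregator(dim)   # fuse views of one sensor
            self.position_agg = AttentionAggregator(dim)  # fuse sensors across body positions
            self.modality_agg = AttentionAggregator(dim)  # fuse across sensor modalities
            self.classifier = nn.Linear(dim, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, num_modalities, num_positions, num_views, dim)
            b, m, p, v, d = x.shape
            feats = self.feature_agg(x.reshape(b * m * p, v, d)).reshape(b, m, p, d)
            pos = self.position_agg(feats.reshape(b * m, p, d)).reshape(b, m, d)
            mod = self.modality_agg(pos)                   # (batch, dim)
            return self.classifier(mod)


    if __name__ == "__main__":
        net = HierarchicalAggregationNet(dim=64, num_classes=12)
        dummy = torch.randn(8, 2, 3, 4, 64)  # 8 samples, 2 modalities, 3 positions, 4 views
        print(net(dummy).shape)              # torch.Size([8, 12])

In this sketch, each level collapses one axis of the input tensor with a learned weighted sum, so correlations are modeled within a level before the next level fuses across positions and then modalities; the paper's actual modules additionally use non-local operations to capture pairwise feature correlations.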