IEEE Access (Jan 2021)

Learning Human Activity From Visual Data Using Deep Learning

  • Taha Alhersh,
  • Heiner Stuckenschmidt,
  • Atiq Ur Rehman,
  • Samir Brahim Belhaouari

DOI
https://doi.org/10.1109/ACCESS.2021.3099567
Journal volume & issue
Vol. 9
pp. 106245–106253

Abstract


Advances in wearable technologies have the potential to revolutionize and improve people’s lives. The gains go beyond the personal sphere, encompassing business and, by extension, the global economy. These technologies are incorporated in electronic devices that collect data from consumers’ bodies and their immediate environment. Human activity recognition, which involves the use of various body sensors and modalities either separately or simultaneously, is one of the most important areas of wearable technology development. In real-life scenarios, the number of sensors deployed is dictated by practical and financial considerations. In the research for this article, we revisited our earlier efforts and accordingly reduced the number of required sensors, limiting ourselves to first-person vision data for activity recognition. Nonetheless, our results beat the state of the art by more than 4% in F1 score.

Keywords