IEEE Access (Jan 2024)

Combined Activity Recognition Based on Continuous-Wave Radar and Vision Transformer

  • Junhao Zhou,
  • Chao Sun,
  • Youngok Kim

DOI
https://doi.org/10.1109/ACCESS.2024.3514140
Journal volume & issue
Vol. 12
pp. 185448 – 185459

Abstract


Emergency scenario recognition has diverse applications, including monitoring older adults living alone and detecting crimes. Predominantly, emergency scenario recognition methods rely on cameras, which present privacy issues, especially in bathrooms and homes. Additionally, most studies concentrate on recognizing simple isolated activities such as falling down and lying down. However, real-life emergency scenarios often involve more than an isolated activity. This paper introduces a method for classifying emergency scenarios, including falling down and lying down, as well as combined activities such as falling down and standing up, lying down and standing up, falling down and crawling, and lying down and crawling. Falling down and lying down are similar scenarios, and we consider that a person may stand up or crawl afterward. Our approach hinges on micro-Doppler features derived from continuous-wave radar signals. We first process the human-generated radar echo signals to produce micro-Doppler spectrogram images. Subsequently, we propose an AI model utilizing multi-head attention mechanisms for image training and testing, demonstrating strong performance on our small datasets while reducing computational complexity compared to previous vision transformer models. Preliminary results highlight the efficacy of our approach, with an average recognition accuracy exceeding 95%.
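To illustrate the pipeline described in the abstract, the sketch below shows how a continuous-wave radar baseband signal could be converted into a micro-Doppler spectrogram (via a short-time Fourier transform) and then classified with a small multi-head-attention model over image patches. This is a minimal, hedged example: the window length, patch size, embedding dimension, number of heads, and class count are illustrative assumptions, not the architecture or parameters reported in the paper.

```python
import numpy as np
from scipy import signal
import torch
import torch.nn as nn

def micro_doppler_spectrogram(iq, fs, nperseg=256, noverlap=224):
    """STFT of a complex CW-radar baseband signal.

    Returns a log-magnitude spectrogram (Doppler bins x time frames).
    Window length and overlap are illustrative, not the paper's settings.
    """
    f, t, Zxx = signal.stft(iq, fs=fs, nperseg=nperseg, noverlap=noverlap,
                            return_onesided=False)
    S = np.fft.fftshift(np.abs(Zxx), axes=0)           # center zero Doppler
    return 20 * np.log10(S + 1e-12)                    # dB scale

class PatchAttentionClassifier(nn.Module):
    """Minimal multi-head-attention classifier over spectrogram patches
    (a ViT-style sketch; the paper's exact model is not reproduced here)."""
    def __init__(self, img_size=128, patch=16, dim=64, heads=4, classes=6):
        super().__init__()
        self.embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        n_patches = (img_size // patch) ** 2
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, classes)

    def forward(self, x):                               # x: (B, 1, H, W)
        tok = self.embed(x).flatten(2).transpose(1, 2)  # (B, N, dim) patch tokens
        tok = torch.cat([self.cls.expand(x.size(0), -1, -1), tok], dim=1) + self.pos
        out, _ = self.attn(tok, tok, tok)               # single self-attention block
        out = self.norm(out + tok)                      # residual + layer norm
        return self.head(out[:, 0])                     # classify from [CLS] token
```

In this reading, the spectrogram image is treated like any other input to a patch-based attention classifier; the six output classes stand in for the isolated and combined activity scenarios listed in the abstract.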

Keywords