IEEE Access (Jan 2024)

LLMT: A Transformer-Based Multi-Modal Lower Limb Human Motion Prediction Model for Assistive Robotics Applications

  • S. Hossein Sadat Hosseini,
  • Nader N. Joojili,
  • Mojtaba Ahmadi

DOI: https://doi.org/10.1109/ACCESS.2024.3413576
Journal volume & issue: Vol. 12, pp. 82730–82741

Abstract


Recognition of intended human motion is key to developing intelligent human-robot interaction (HRI) controllers for assistive devices. This study aims to develop a human motion recognition architecture tailored specifically for real-time assistive robotics, such as exoskeletons and robot-assisted walking systems. We introduce a multi-modal lower limb modified transformer (LLMT), an architecture that bridges the gap in existing HRI technologies by defining a comprehensive set of relevant motions that generalize well to unseen subjects, ensuring adaptability and precision in diverse interaction scenarios. LLMT uses sparse multi-channel surface electromyography (sEMG) and inertial measurement unit (IMU) signals to classify different motion patterns. The accuracy of the proposed method was compared with that of classical machine learning (cML) models and a convolutional neural network (CNN). The comparison used experimental data from seven human participants in two motion scenarios, as well as a benchmark dataset. The validation methods included inter-subject, leave-one-subject-out, and intra-subject approaches. The proposed method demonstrated excellent accuracy, achieving $99.42 \pm 0.25\%$, $99.07 \pm 0.32\%$, and $97.08 \pm 1.16\%$ in the inter-subject, leave-one-subject-out, and intra-subject validations on the collected and benchmark datasets, respectively. Additionally, it exhibited an average online prediction time of 84.09 ms within the recording loop.
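To make the described pipeline concrete, the sketch below shows one plausible way a transformer-encoder classifier could operate on windowed, channel-concatenated sEMG and IMU signals. It is a minimal illustration, not the authors' LLMT implementation: the channel counts, window length, embedding size, and number of motion classes are assumptions for demonstration only, since the abstract does not specify them.

```python
# Minimal sketch (not the authors' LLMT): a transformer-encoder classifier over
# windowed sEMG + IMU signals fused by channel concatenation.
# All dimensions below (8 sEMG channels, 6 IMU channels, 200-sample window,
# 5 motion classes) are illustrative assumptions.
import torch
import torch.nn as nn


class MotionTransformer(nn.Module):
    def __init__(self, n_semg=8, n_imu=6, window=200, d_model=64,
                 n_heads=4, n_layers=2, n_classes=5):
        super().__init__()
        n_channels = n_semg + n_imu                     # multi-modal fusion by concatenation
        self.embed = nn.Linear(n_channels, d_model)     # per-time-step linear embedding
        self.pos = nn.Parameter(torch.zeros(1, window, d_model))  # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)       # motion-class logits

    def forward(self, x):
        # x: (batch, window, n_semg + n_imu) -- one analysis window per sample
        h = self.embed(x) + self.pos
        h = self.encoder(h)
        return self.head(h.mean(dim=1))                 # average-pool over time, then classify


if __name__ == "__main__":
    model = MotionTransformer()
    dummy = torch.randn(4, 200, 14)                     # 4 windows, 200 samples, 8 sEMG + 6 IMU channels
    print(model(dummy).shape)                           # torch.Size([4, 5])
```

In the leave-one-subject-out setting reported above, such a model would be retrained with all windows from one participant withheld as the test set, repeating once per participant and averaging the resulting accuracies.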
