IEEE Access (Jan 2024)

Automated Classification of Virtual Reality User Motions Using a Motion Atlas and Machine Learning Approach

  • Pawel Pieta,
  • Hubert Jegierski,
  • Pawel Babiuch,
  • Maciej Jegierski,
  • Miroslaw Plaza,
  • Grzegorz Lukawski,
  • Stanislaw Deniziak,
  • Artur Jasinski,
  • Jacek Opalka,
  • Pawel Wegrzyn,
  • Magdalena Igras-Cybulska,
  • Adrian Lapczynski

DOI
https://doi.org/10.1109/ACCESS.2024.3424930
Journal volume & issue
Vol. 12
pp. 94584 – 94609

Abstract

A novel motion atlas consisting of 56 different motions was constructed to meet the needs of virtual reality (VR) video games. Within the atlas, four motion categories were defined: head movements (HEAD), hand and arm movements (ARMS), whole-body movements (BODY), and animations (ANIM). The data identifying the motion patterns were collected exclusively using VR system peripherals, namely goggles and controllers; for motion capture (MoCap) purposes, the HTC Vive Pro and Meta Quest 2 devices were used. Using popular machine learning (ML) architectures, 300 motion recognition models were trained, and the most effective ones were selected. The study included classical algorithms, chosen based on a literature review: k-nearest neighbors (kNN), logistic regression (LR), support vector machine (SVM), decision tree (DT), extra-trees classifier (Ensemble), random forest (RF), naive Bayes classifier (NB), and LightGBM (LGBM). Deep learning (DL) algorithms were also tested: convolutional neural network (CNN), transformer, and long short-term memory (LSTM). Despite the significantly larger size of the motion atlas compared to other approaches and the restriction to data naturally available within VR systems, the best CNN model achieved a weighted F-score of nearly 98% for motion recognition.
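The model-comparison step described in the abstract — training several classical classifiers and selecting the best by weighted F-score — can be sketched as follows. This is a minimal illustration using scikit-learn with synthetic stand-in data (the paper's actual VR tracking features, preprocessing, hyperparameters, and deep learning models are not specified in the abstract); only a subset of the listed algorithms is shown.

```python
# Hedged sketch: compare classical classifiers on synthetic "motion feature"
# data and pick the best model by weighted F-score, mirroring the paper's
# model-selection criterion. All data and dimensions here are invented.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in dataset: 56 motion classes, 60-dimensional feature vectors
# (the real features would come from headset and controller tracking data).
X, y = make_classification(n_samples=5600, n_features=60, n_informative=40,
                           n_classes=56, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

models = {
    "kNN": KNeighborsClassifier(),
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "DT": DecisionTreeClassifier(random_state=0),
    "Ensemble": ExtraTreesClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "NB": GaussianNB(),
}

# Train each model and score it with the weighted F-score, which weights
# each class's F1 by its support (useful when classes are imbalanced).
scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    scores[name] = f1_score(y_te, model.predict(X_te), average="weighted")

best = max(scores, key=scores.get)
print(f"best model: {best} (weighted F1 = {scores[best]:.3f})")
```

The weighted F-score is a natural choice here because, with 56 classes, per-class support can vary; averaging per-class F1 weighted by support summarizes performance in a single number.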

Keywords