Multimodal Technologies and Interaction (Jun 2020)
A Super-Bagging Method for Volleyball Action Recognition Using Wearable Sensors
Abstract
Access to performance data during matches and training sessions is important for coaches and players. Although many video tagging systems are available that can provide such access, these systems require manual effort. Data from Inertial Measurement Units (IMUs) could be used to automatically tag video recordings in terms of players’ actions. However, the data gathered during volleyball sessions are generally very imbalanced, since for an individual player most time intervals can be classified as “non-actions” rather than “actions”. This makes automatic annotation of video recordings of volleyball matches a challenging machine-learning problem. To address this problem, we evaluated balanced and imbalanced learning methods alongside our newly proposed ‘super-bagging’ method for volleyball action modelling. All methods are evaluated using six classifiers and four sensors (i.e., accelerometer, magnetometer, gyroscope and barometer). We demonstrate that imbalanced learning provides better unweighted average recall (UAR = 83.99%) for the non-dominant hand using a naive Bayes classifier than balanced learning, while balanced learning provides better performance (UAR = 84.18%) for the dominant hand using a tree bagger classifier than imbalanced learning. Our super-bagging method provides the best UAR (84.19%). The super-bagging method also provides a better averaged UAR than the balanced and imbalanced methods in 8 out of 10 cases, demonstrating its potential for IMU sensor data. One potential application of these novel models is fatigue and stamina estimation, e.g., by keeping track of how many actions a player is performing and when these are being performed.
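The abstract contrasts imbalanced learning (training on the raw, skewed class distribution) with balanced learning (equalising class sizes before training). The sketch below illustrates one common balanced-bagging scheme on synthetic data: each bootstrap undersamples the majority “non-action” class to the minority size, a naive Bayes model is fit per bag, and predictions are combined by majority vote. This is a generic illustration under assumed data shapes and is not the authors’ super-bagging method, which the abstract does not define.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Synthetic imbalanced data: many "non-action" windows (label 0),
# few "action" windows (label 1); 4 features, e.g. one per sensor.
X = np.vstack([rng.normal(0.0, 1.0, size=(900, 4)),
               rng.normal(2.0, 1.0, size=(100, 4))])
y = np.array([0] * 900 + [1] * 100)

def balanced_bagging(X, y, n_estimators=10, seed=0):
    """Fit one classifier per bootstrap in which the majority class
    is undersampled to the minority-class size."""
    rng = np.random.default_rng(seed)
    idx_min = np.flatnonzero(y == 1)
    idx_maj = np.flatnonzero(y == 0)
    models = []
    for _ in range(n_estimators):
        boot_min = rng.choice(idx_min, size=idx_min.size, replace=True)
        boot_maj = rng.choice(idx_maj, size=idx_min.size, replace=True)
        idx = np.concatenate([boot_min, boot_maj])
        models.append(GaussianNB().fit(X[idx], y[idx]))
    return models

def predict(models, X):
    """Majority vote across the ensemble."""
    votes = np.mean([m.predict(X) for m in models], axis=0)
    return (votes >= 0.5).astype(int)

models = balanced_bagging(X, y)
pred = predict(models, X)

# Unweighted average recall (UAR): mean of per-class recalls,
# the metric reported in the abstract.
recall_0 = np.mean(pred[y == 0] == 0)
recall_1 = np.mean(pred[y == 1] == 1)
uar = (recall_0 + recall_1) / 2
print(f"UAR = {uar:.2%}")
```

UAR is used rather than accuracy because, with 90% “non-action” windows, a classifier that always predicts the majority class would reach 90% accuracy but only 50% UAR.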
Keywords