IEEE Access (Jan 2024)
A Class Balanced Spatio-Temporal Self-Attention Model for Combat Intention Recognition
Abstract
To address model performance degradation in combat intention recognition caused by the long-tailed distribution of battlefield data and by the neglect of spatial-dimension information in multivariate time-series data, this paper proposes a class-balanced spatio-temporal self-attention (CBSTSA) model. By incorporating spatial and temporal attention mechanisms, the model captures interdependencies among features and extracts salient information from both the temporal and spatial dimensions. Furthermore, to account for the long-tailed distribution of battlefield data, a re-weighted class-balanced loss function is introduced to train the model. Experimental results demonstrate the superiority of the CBSTSA model: it achieves approximately 95.67% accuracy in typical scenarios, surpassing benchmark schemes by 4–5 percentage points.
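The abstract does not specify how the re-weighted class-balanced loss is computed; a common choice for long-tailed data is the effective-number-of-samples weighting of Cui et al. (2019). The following is a minimal sketch under that assumption (the class names, counts, and hyperparameter `beta` are illustrative, not taken from the paper):

```python
# Hypothetical sketch (not the authors' code): class-balanced re-weighting of a
# cross-entropy loss for a long-tailed intention-recognition dataset, assuming
# the effective-number-of-samples scheme of Cui et al. (2019).
import torch
import torch.nn as nn


def class_balanced_weights(samples_per_class, beta=0.9999):
    """Weight each class by the inverse of its effective sample number
    E_c = (1 - beta**n_c) / (1 - beta), so rare classes get larger weights."""
    counts = torch.tensor(samples_per_class, dtype=torch.float)
    effective_num = 1.0 - torch.pow(beta, counts)
    weights = (1.0 - beta) / effective_num
    # Normalise so the weights sum to the number of classes.
    return weights / weights.sum() * len(samples_per_class)


# Example: an assumed long-tailed distribution over five intention classes.
samples_per_class = [5000, 1200, 400, 90, 25]
weights = class_balanced_weights(samples_per_class)
criterion = nn.CrossEntropyLoss(weight=weights)  # re-weighted training loss

logits = torch.randn(8, 5)            # model outputs for a batch of 8 sequences
targets = torch.randint(0, 5, (8,))   # ground-truth intention labels
loss = criterion(logits, targets)
```

With this weighting, misclassifying a tail class (e.g. the class with only 25 samples) contributes more to the loss than misclassifying a head class, which is the general effect the re-weighted class-balanced loss is intended to achieve.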
Keywords