IEEE Access (Jan 2023)
A Comparative Study of Situation Awareness-Based Decision-Making Model Reinforcement Learning Adaptive Automation in Evolving Conditions
Abstract
Situation-awareness-based decision-making (SABDM) models constructed using cognitive maps and goal-directed task analysis techniques have been successfully used in decision support systems in safety-critical and mission-critical environments such as air traffic control and electrical energy distribution. Reinforcement learning (RL) and other machine learning techniques can automate the adjustment of situation-awareness mental-model parameters, reducing the expert effort required for initial configuration and long-term maintenance without altering the mental model's structure, thereby preserving the cognitive and explainability characteristics of the SABDM model. Real-world models must evolve to cope with changing environmental conditions. This study evaluates reinforcement learning as an online adaptive technique for adjusting situation-awareness mental-model parameters under evolving conditions, a technique we name SABDM/RL. We conducted evaluation experiments on real-world public datasets to compare the performance of SABDM/RL with that of other adaptive machine learning methods under distinct concept-drift conditions. We measured both the overall and the dynamic performance of these techniques to understand how well they adapt to evolving environments. The experiments show that SABDM/RL, supported by concept drift detection techniques, performs comparably to modern online adaptive machine learning classification methods while retaining the mental-model strengths of situation-awareness-based decision-making systems.
Keywords