IEEE Access (Jan 2023)
Reference-Based AI Decision Support for Cybersecurity
Abstract
In the cyber environment, massive amounts of data are generated daily. Artificial Intelligence (AI) technologies can manage this vast data effectively, supporting efficient operations in the cyber environment, and active research has advanced AI significantly in this regard. However, as AI achieves higher performance, it becomes increasingly complex, which lowers the interpretability of its outputs. This black-box nature makes AI difficult to apply in fields like cybersecurity, where the cost of false positives is high. To address this issue, researchers have developed eXplainable Artificial Intelligence (XAI) techniques that aim to enhance the utility of AI by providing interpretations of its predictions. Most prior work interprets AI results by explaining how a model functions in terms of feature importance. However, this approach fails to provide clear interpretations in fields where interpretability is crucial, such as security. This paper therefore proposes a framework that offers interpretations of AI results even in unsupervised environments, which suit security scenarios. In addition, we improve the logic for calculating the Reference and enhance both functionality and performance compared with previous research. We provide supplementary information that supports interpretation, such as p-values and References, to offer more effective decision support to security analysts, ultimately reducing false alarms and improving model performance. Overall, we aim to improve model performance through clear interpretations suited to security tasks, thereby contributing to more effective decision-making by security analysts.
Keywords