IEEE Open Journal of the Communications Society (Jan 2022)

“Why Should I Trust Your IDS?”: An Explainable Deep Learning Framework for Intrusion Detection Systems in Internet of Things Networks

  • Zakaria Abou El Houda,
  • Bouziane Brik,
  • Lyes Khoukhi

DOI
https://doi.org/10.1109/OJCOMS.2022.3188750
Journal volume & issue
Vol. 3
pp. 1164 – 1176

Abstract


The Internet of Things (IoT) is an emerging paradigm that is turning cities worldwide into smart cities. However, this emergence is accompanied by several cybersecurity concerns, due mainly to the data sharing and constant connectivity of IoT networks. To address this problem, multiple Intrusion Detection Systems (IDSs) have been designed as security mechanisms, and they have proven effective in mitigating several IoT-related attacks, especially when they rely on deep learning (DL) algorithms. Indeed, Deep Neural Networks (DNNs) significantly improve the detection rate of IoT-related intrusions. However, DL-based models are becoming increasingly complex, and their decisions are hard for users to interpret, especially companies' executive staff and cybersecurity experts. Hence, these users can neither understand and trust the decisions of DL models, nor optimize their own decisions based on the models' outputs. To overcome these limits, Explainable Artificial Intelligence (XAI) is an emerging paradigm of Artificial Intelligence (AI) that provides a set of techniques to help interpret and understand the predictions made by DL models. Thus, XAI makes it possible to explain the decisions of DL-based IDSs so that they become interpretable by cybersecurity experts. In this paper, we design a new XAI-based framework to explain critical DL-based decisions of IoT-related IDSs. Our framework relies on a novel IDS for IoT networks, which we also develop by leveraging a deep neural network to detect IoT-related intrusions. In addition, our framework uses three main XAI techniques (i.e., RuleFit, Local Interpretable Model-agnostic Explanations (LIME), and SHapley Additive exPlanations (SHAP)) on top of our DNN-based model. Our framework can provide both local and global explanations to improve the interpretation of DL-based decisions. Local explanations target a single, particular DL output, while global explanations identify the most important features that led to each decision (e.g., an intrusion detection). Thus, our proposed framework increases transparency and builds trust between cybersecurity experts and the decisions made by our DL-based IDS model. Both the NSL-KDD and UNSW-NB15 datasets are used to validate the feasibility of our XAI framework. The experimental results show that our framework improves the interpretability of the IoT IDS against well-known IoT attacks and helps cybersecurity experts gain a better understanding of IDS decisions.
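
To illustrate the kind of local and global explanations the abstract describes, the following is a minimal sketch of applying a model-agnostic SHAP explainer on top of a DNN-style IDS. Everything in it is assumed rather than taken from the paper: a scikit-learn MLPClassifier stands in for the authors' DNN, the toy data stands in for a preprocessed numeric encoding of NSL-KDD or UNSW-NB15 features, and the chosen sample sizes are arbitrary.

```python
# Sketch: SHAP-style local/global explanations over a DNN-like IDS (illustrative only).
import numpy as np
import shap
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy data in place of NSL-KDD / UNSW-NB15 (assumption, not the paper's data).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # 1 = attack, 0 = normal
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stand-in for the DL-based IDS (not the authors' architecture).
ids_model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
ids_model.fit(X_train, y_train)

# Model-agnostic SHAP explainer over the IDS's predicted attack probability.
background = shap.sample(X_train, 100)  # background set for KernelExplainer
explainer = shap.KernelExplainer(lambda d: ids_model.predict_proba(d)[:, 1], background)

# Local explanation: per-feature contributions for one specific flagged flow.
shap_local = explainer.shap_values(X_test[:1])
print("Contributions for one decision:", shap_local)

# Global explanation: mean |SHAP| over a sample ranks the most influential features.
shap_sample = explainer.shap_values(X_test[:50])
global_importance = np.abs(shap_sample).mean(axis=0)
print("Top features:", np.argsort(global_importance)[::-1][:5])
```

The same model could be probed with LIME's tabular explainer for instance-level explanations, or with RuleFit to extract interpretable rules, mirroring the three XAI techniques named in the abstract.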

Keywords