Applied Sciences (Jun 2024)
No Pictures, Please: Using eXplainable Artificial Intelligence to Demystify CNNs for Encrypted Network Packet Classification
Abstract
Real-time traffic classification is one of the most important challenges for both Internet Service Providers and users, because correct traffic policing and planning allow for proper optimization of network resources. However, there is no perfect solution for this problem, due to the complexity of modern traffic. Nowadays, convolutional neural networks (CNNs) are widely regarded as the miraculous solution for the classification of encrypted network packets. Nevertheless, given the opaque nature of deep learning, an appropriate explanation of how a model detects each traffic category cannot be easily obtained. In this paper, we present an analysis of some popular CNN-based models for network packet classification, focusing on how each model works and how it was implemented, trained, and tested. By using eXplainable Artificial Intelligence (XAI), we are able to identify the input regions most important to the models and derive some reasoning to justify their decisions. Moreover, in the process, we look for flawed methodologies that can lead to data leakage or an unrealistic performance evaluation. The results show that the CNNs mainly focus on the packet length to make a decision, which is a waste of resources: as we also verify, the same classification can be achieved with simpler machine learning models, such as decision trees. Our findings indicate that poor experimental protocols result in an unrealistic performance evaluation. Moreover, XAI techniques are of great help in the assessment of a model, showing that, apart from the packet length, the CNNs do not detect significant features in encrypted payloads.
Keywords