IEEE Open Journal of the Communications Society (Jan 2024)

Securing 5G/6G IoT Using Transformer and Personalized Federated Learning: An Access-Side Distributed Malicious Traffic Detection Framework

  • Yantian Luo,
  • Xu Chen,
  • Hancun Sun,
  • Xiangling Li,
  • Ning Ge,
  • Wei Feng,
  • Jianhua Lu

DOI
https://doi.org/10.1109/OJCOMS.2024.3365976
Journal volume & issue
Vol. 5, pp. 1325–1339

Abstract


Malicious traffic poses a significant threat to current 5G networks. In the upcoming 6G era, with the rapid development of the Internet of Things (IoT), defending against malicious traffic becomes even more challenging due to the diverse nature and widespread distribution of IoT devices. This paper presents a new distributed framework for detecting malicious traffic on the access side of the IoT. This framework shifts the line of defense forward and can alleviate the resource-bottleneck problem of traditional detectors, which are usually deployed on the victim side. In particular, we devise a transformer-based neural network tailored for distributed malicious traffic detection at multiple detection points. Additionally, we develop a personalized federated learning-based collaborative algorithm that enables horizontal collaboration among multiple detection points by sharing neural network parameters. Unlike the traditional federated learning framework, which trains a single high-precision global model, our proposed framework explicitly accounts for the differences in IoT traffic distribution across access points and uses local traffic data to train, on the basis of the global model, personalized models with better local detection performance. The experimental results demonstrate that our approach achieves an average detection accuracy of 99.2% and an F1-score of 99.2% across all detection points on the N-BaIoT dataset. Compared with methods lacking collaboration, our approach exhibits significant improvements in accuracy, precision, recall, and F1-score. Moreover, our detection performance is comparable to that of centralized learning frameworks, despite sharing only model parameters.
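The abstract's core idea is a two-phase scheme: detection points first collaborate by exchanging only model parameters to build a global model, and each point then fine-tunes that global model on its own local traffic to obtain a personalized detector. The sketch below illustrates this pattern in Python/PyTorch under stated assumptions; it is not the authors' implementation. The `TinyDetector` module, the FedAvg-style parameter averaging, the round/epoch counts, and the 115-dimensional input (matching N-BaIoT's feature count) are illustrative assumptions standing in for the paper's transformer-based detector and its training details.

```python
# Minimal sketch (assumed, not the authors' code) of personalized federated
# learning for access-side malicious traffic detection:
#   1) collaborative phase: clients train locally, share only parameters,
#      and a global model is formed by FedAvg-style averaging;
#   2) personalization phase: each detection point fine-tunes the global
#      model on its own local traffic distribution.
import copy
import torch
import torch.nn as nn


class TinyDetector(nn.Module):
    """Placeholder classifier standing in for the transformer-based detector."""

    def __init__(self, n_features: int = 115, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_classes)
        )

    def forward(self, x):
        return self.net(x)


def local_train(model, loader, epochs=1, lr=1e-3):
    """One detection point's local training; only the weights leave the device."""
    model = copy.deepcopy(model)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()


def fed_avg(state_dicts):
    """Average parameters from all detection points; no raw traffic is exchanged."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg


def personalized_fl(client_loaders, rounds=5, personal_epochs=2):
    # Collaborative phase: build a shared global model over several rounds.
    global_model = TinyDetector()
    for _ in range(rounds):
        updates = [local_train(global_model, dl) for dl in client_loaders]
        global_model.load_state_dict(fed_avg(updates))

    # Personalization phase: fine-tune the global model per detection point.
    personal_models = []
    for dl in client_loaders:
        m = TinyDetector()
        m.load_state_dict(copy.deepcopy(global_model.state_dict()))
        m.load_state_dict(local_train(m, dl, epochs=personal_epochs))
        personal_models.append(m)
    return global_model, personal_models
```

In use, `client_loaders` would be one `DataLoader` per detection point, built from that point's own labeled traffic features; the returned personalized models are what each point deploys, while the global model captures the shared knowledge accumulated through parameter exchange.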

Keywords