IEEE Access (Jan 2023)

Data Poisoning Attacks With Hybrid Particle Swarm Optimization Algorithms Against Federated Learning in Connected and Autonomous Vehicles

Chi Cui, Haiping Du, Zhijuan Jia, Xiaofei Zhang, Yuchu He, Yanyan Yang

DOI: https://doi.org/10.1109/ACCESS.2023.3337638
Journal volume & issue: Vol. 11, pp. 136361–136369

Abstract

As a state-of-the-art distributed learning approach, federated learning has gained much popularity in connected and autonomous vehicles (CAVs). In federated learning, models are trained locally, and only model parameters, rather than raw data, are exchanged to aggregate a global model. Compared with traditional centralized learning, the stronger privacy protection and reduced network bandwidth consumption offered by federated learning make it particularly attractive for CAVs. On the other hand, poisoning attacks, which compromise the integrity of the trained model by injecting crafted perturbations into the training samples, have become a major threat to deep learning in recent years, and the distributed nature of federated learning has been shown to make it even more vulnerable to them. In view of this situation, the strategies and attack methods available to adversaries are worth studying. In this paper, two novel optimization-based black-box, clean-label data poisoning attack methods are proposed, in which poisoning perturbations are generated by particle swarm optimization hybridized with simulated annealing and with a genetic algorithm, respectively. The methods are evaluated in experiments on a traffic sign recognition system for CAVs, and the results show that they significantly degrade the prediction accuracy of the global model even when only a small portion of the training data is poisoned.
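The abstract describes the first hybrid only at a high level. Below is a minimal sketch of how a particle swarm optimizer with a simulated-annealing acceptance step could search for a bounded, black-box poisoning perturbation; the function name, the fitness design, and every hyperparameter here are illustrative assumptions, not the paper's actual algorithm.

```python
# Minimal sketch of a PSO + simulated-annealing hybrid searching for a
# clean-label poisoning perturbation. Hypothetical: the paper's exact
# fitness function, bounds, and cooling schedule are not given in the
# abstract, so all values below are placeholder assumptions.
import numpy as np

def pso_sa_perturbation(fitness, dim, n_particles=20, iters=100,
                        bound=0.05, t0=1.0, cooling=0.95, seed=0):
    """Search for a perturbation delta with ||delta||_inf <= bound that
    maximizes `fitness` (e.g., the victim model's loss on poisoned
    samples, queried in a black-box fashion)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-bound, bound, (n_particles, dim))  # particle positions
    v = np.zeros_like(x)                                # particle velocities
    pbest = x.copy()                                    # personal bests
    pbest_f = np.array([fitness(p) for p in x])
    g = pbest[pbest_f.argmax()].copy()                  # global best
    g_f, temp = pbest_f.max(), t0
    for _ in range(iters):
        # Standard PSO update: inertia + pull toward personal/global bests.
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, -bound, bound)
        f = np.array([fitness(p) for p in x])
        # Simulated-annealing acceptance: occasionally adopt a *worse*
        # position as the personal best, with probability decaying as the
        # temperature cools, to escape local optima.
        prob = np.exp(np.minimum(f - pbest_f, 0.0) / temp)
        accept = (f > pbest_f) | (rng.random(n_particles) < prob)
        pbest[accept], pbest_f[accept] = x[accept], f[accept]
        if pbest_f.max() > g_f:
            g, g_f = pbest[pbest_f.argmax()].copy(), pbest_f.max()
        temp *= cooling
    return g, g_f
```

In a black-box setting, `fitness` would only query the victim, for instance returning the global model's loss on a batch of perturbed traffic-sign images, while a small `bound` keeps the perturbation visually imperceptible so the poisoned samples remain clean-label.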
