Discover Internet of Things (Feb 2025)

Securing federated learning: a defense strategy against targeted data poisoning attack

  • Ansam Khraisat,
  • Ammar Alazab,
  • Moutaz Alazab,
  • Tony Jan,
  • Sarabjot Singh,
  • Md. Ashraf Uddin

DOI
https://doi.org/10.1007/s43926-025-00108-6
Journal volume & issue
Vol. 5, no. 1
pp. 1–17

Abstract

Ensuring the security and integrity of Federated Learning (FL) models against adversarial attacks is critical. Among these threats, targeted data poisoning attacks, particularly label flipping, pose a significant challenge by undermining model accuracy and reliability. This paper investigates targeted data poisoning attacks in FL systems, where a small fraction of malicious participants corrupt the global model through mislabeled data updates. Our findings demonstrate that even a minor presence of malicious participants can substantially decrease classification accuracy and recall, especially when attacks focus on specific classes. We also examine the longevity and timing of these attacks during early and late training rounds, highlighting the impact of malicious participant availability on attack effectiveness. To mitigate these threats, we propose a defense strategy that identifies malicious participants by analyzing parameter updates across vulnerable training rounds. Utilizing Principal Component Analysis (PCA) for dimensionality reduction and anomaly detection, our approach effectively isolates malicious updates. Extensive simulations on standard datasets validate the effectiveness of our algorithm in accurately identifying and excluding malicious participants, thereby enhancing the integrity of the FL model. These results offer a robust defense against sophisticated poisoning strategies, significantly improving FL security.
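The PCA-based screening step described in the abstract can be pictured with a minimal sketch. The Python snippet below is an illustrative assumption, not the authors' published algorithm: the function name, the two-component projection, and the MAD-based outlier rule are all hypothetical choices. It flattens each participant's parameter update into a vector, projects the set with PCA, and flags updates whose distance from a robust center exceeds a threshold.

```python
# Illustrative sketch (not the paper's exact algorithm): flag anomalous
# client updates by projecting flattened parameter vectors with PCA and
# scoring each client's distance from the median in the reduced space.
import numpy as np
from sklearn.decomposition import PCA

def flag_malicious_updates(updates, n_components=2, threshold=2.5):
    """updates: array of shape (n_clients, n_params), one flattened
    parameter update per participant. Returns indices of suspected
    malicious clients. Threshold and outlier rule are hypothetical."""
    reduced = PCA(n_components=n_components).fit_transform(updates)
    center = np.median(reduced, axis=0)                  # robust center
    dists = np.linalg.norm(reduced - center, axis=1)     # per-client distance
    mad = np.median(np.abs(dists - np.median(dists)))    # robust spread
    scores = (dists - np.median(dists)) / (mad + 1e-12)  # modified z-score
    return np.where(scores > threshold)[0]               # outliers = suspects

# Toy usage: 20 honest clients plus 2 poisoned ones with shifted updates;
# the call should flag the poisoned clients (indices 20 and 21).
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(20, 1000))
poisoned = rng.normal(1.0, 0.1, size=(2, 1000))
print(flag_malicious_updates(np.vstack([honest, poisoned])))
```

A median/MAD rule is used here rather than mean/standard deviation so that the poisoned updates themselves cannot drag the reference point toward them; the paper's actual detection criterion across vulnerable training rounds may differ.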

Keywords