Symmetry (Nov 2024)

FLSAD: Defending Backdoor Attacks in Federated Learning via Self-Attention Distillation

  • Lucheng Chen,
  • Xiaoshuang Liu,
  • Ailing Wang,
  • Weiwei Zhai,
  • Xiang Cheng

DOI
https://doi.org/10.3390/sym16111497
Journal volume & issue
Vol. 16, no. 11
p. 1497

Abstract

Federated Learning (FL), as a distributed machine learning framework, can effectively learn symmetric and asymmetric patterns from large-scale participants. However, FL is susceptible to backdoor attacks, in which attackers inject triggers into the model so that backdoored samples are misclassified as an attacker-chosen target class. Because backdoor attacks in FL are stealthy, it is difficult for users to discover their symmetric and asymmetric properties. Existing backdoor defense methods in FL degrade model performance while removing backdoors, and some assume the availability of clean samples, which does not match realistic scenarios. To address these issues, we propose FLSAD, an effective backdoor defense method for FL based on self-attention distillation. FLSAD first recovers triggers with an entropy maximization estimator and then leverages self-attention distillation, guided by the recovered triggers, to eliminate the backdoor. In extensive evaluations on four real-world datasets, FLSAD reduces the success rates of different state-of-the-art backdoor attacks to 2%, outperforming the baseline defense methods.
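To make the self-attention distillation idea concrete, here is a minimal sketch of a commonly used SAD loss, assuming the standard formulation in which a layer's attention map is the channel-wise sum of squared activations, normalized, and shallower layers are trained to mimic deeper ones. The function names, shapes, and loss choice (MSE) are illustrative assumptions, not FLSAD's actual implementation:

```python
import numpy as np

def attention_map(feat: np.ndarray) -> np.ndarray:
    """feat: (C, H, W) activations -> (H, W) L2-normalized attention map."""
    amap = np.sum(feat ** 2, axis=0)             # aggregate energy over channels
    return amap / (np.linalg.norm(amap) + 1e-8)  # normalize for scale invariance

def sad_loss(feats: list) -> float:
    """MSE between each layer's attention map and the next (deeper) layer's.

    feats: list of (C, H, W) arrays with matching spatial size, ordered
    shallow -> deep. A hypothetical stand-in for intermediate activations.
    """
    maps = [attention_map(f) for f in feats]
    return float(sum(np.mean((maps[i] - maps[i + 1]) ** 2)
                     for i in range(len(maps) - 1)))

rng = np.random.default_rng(0)
feats = [rng.standard_normal((8, 16, 16)) for _ in range(3)]
print(sad_loss(feats))
```

Minimizing such a loss encourages consistent attention across layers; in a backdoor-removal setting, distillation of this kind is used to suppress trigger-specific activations without access to clean training data.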

Keywords