IEEE Access (Jan 2024)

Adaptive Selection of Loss Function for Federated Learning Clients Under Adversarial Attacks

  • Suchul Lee

DOI
https://doi.org/10.1109/ACCESS.2024.3426534
Journal volume & issue
Vol. 12
pp. 96051–96062

Abstract

Read online

Federated learning (FL) is a deep learning paradigm in which clients train deep learning models in a distributed manner, keeping raw data local rather than sending it to the cloud, thereby reducing security and privacy concerns. Although FL is designed to be inherently secure, it still has many vulnerabilities. In this paper, we consider an FL scenario in which clients are subjected to an adversarial attack that exploits vulnerabilities in the decision-making process of deep learning models to induce misclassification. We observed that adversarial training involves a trade-off: as classification performance on adversarial examples increases, classification performance on normal samples decreases. To exploit this trade-off effectively, we propose a scheme that adaptively selects the loss function depending on whether an FL client is under attack. Experiments show that the proposed scheme achieves the best robust accuracy while minimizing the loss in natural accuracy. We further combined the proposed scheme with Byzantine-robust aggregation. We expected model training to converge stably, since Byzantine-robust aggregation prevents highly distorted models from being aggregated, but the experimental results were contrary to our expectations.
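As a concrete illustration, the following is a minimal PyTorch sketch of one way such an adaptive loss selection could be realized. The function names, the PGD attack hyperparameters, and the use of a coordinate-wise median as the Byzantine-robust aggregator are assumptions made for illustration; the abstract does not specify which adversarial loss, attack-detection mechanism, or robust aggregation rule the paper actually uses.

```python
import torch
import torch.nn.functional as F


def natural_loss(model, x, y):
    # Standard cross-entropy on clean (natural) samples.
    return F.cross_entropy(model(x), y)


def adversarial_loss(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # Cross-entropy on PGD adversarial examples (Madry-style adversarial
    # training). eps/alpha/steps are illustrative hyperparameters, not
    # values taken from the paper.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = (x_adv.detach() + alpha * grad.sign()).clamp(x - eps, x + eps)
        x_adv = x_adv.clamp(0, 1)
    return F.cross_entropy(model(x_adv), y)


def client_loss(model, x, y, under_attack: bool):
    # Adaptive selection: train with the adversarial loss only on clients
    # that are (detected as) under attack; otherwise keep the natural loss
    # so that clean-sample accuracy is not sacrificed unnecessarily.
    return adversarial_loss(model, x, y) if under_attack else natural_loss(model, x, y)


def median_aggregate(client_states):
    # Byzantine-robust aggregation sketch: coordinate-wise median across
    # client model parameters (one of several possible robust aggregators;
    # assumed here for illustration).
    return {
        k: torch.stack([s[k] for s in client_states]).median(dim=0).values
        for k in client_states[0]
    }
```

In this sketch, each client would call client_loss during local training, and the server would replace plain federated averaging with median_aggregate over the received state dicts.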

Keywords