IEEE Access (Jan 2024)

FL-AGN: A Privacy-Enhanced Federated Learning Method Based on Adaptive Gaussian Noise for Resisting Gradient Inference Attacks

  • Zhifu Huang,
  • Zihao Wei,
  • Jinyang Wang

DOI
https://doi.org/10.1109/ACCESS.2024.3431031
Journal volume & issue
Vol. 12
pp. 101366 – 101373

Abstract

As is well known, the paradigm of federated learning (FL) operates on the principle that data are never centralized on a single server; instead, the server trains and updates a global model from the local models of multiple clients. Compared with traditional machine learning, FL keeps data available yet invisible and thereby preserves data security. However, during FL training, a malicious attacker can launch a gradient inference attack to capture gradient information and then infer sensitive information by analyzing it. An analysis of existing schemes for combating gradient inference attacks shows that their resistance is limited, so designing an effective defense against such attacks remains a challenge. In this paper, a privacy-enhanced federated learning method is proposed that efficiently defends against gradient inference attacks while improving the accuracy of the trained model. In this method, Gaussian noise is used to strengthen the model's resistance to gradient inference attacks, and the Adam gradient descent method is applied to improve model accuracy. Finally, experiments were conducted on the CIFAR-10 and MNIST datasets, and the results show that, compared with alternatives, the proposed method offers stronger resistance to gradient inference attacks and higher model accuracy.
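The two ingredients named in the abstract, noising client gradients before they are shared and applying Adam on the server side, can be sketched as follows. This is a minimal illustration, not the paper's actual method: the clipping bound `clip_norm`, the noise scale `sigma`, and the Adam hyperparameters are illustrative placeholders, and the paper's *adaptive* noise schedule is not reproduced here.

```python
import numpy as np

def clip_and_noise(grad, clip_norm=1.0, sigma=0.5, rng=None):
    """Clip a client gradient to clip_norm and add Gaussian noise before
    sharing it, so the raw gradient (and hence the private data it leaks
    to a gradient inference attack) is never exposed.

    clip_norm and sigma are illustrative values, not the paper's
    adaptive schedule.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=grad.shape)

class AdamServer:
    """Server-side Adam update applied to the average of the noisy
    client gradients (standard Adam; bias-corrected moments)."""

    def __init__(self, dim, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        self.m = np.zeros(dim)   # first-moment estimate
        self.v = np.zeros(dim)   # second-moment estimate
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.t = 0               # step counter

    def step(self, w, avg_grad):
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * avg_grad
        self.v = self.b2 * self.v + (1 - self.b2) * avg_grad ** 2
        m_hat = self.m / (1 - self.b1 ** self.t)   # bias correction
        v_hat = self.v / (1 - self.b2 ** self.t)
        return w - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)
```

In one round, each client would call `clip_and_noise` on its local gradient, the server would average the noisy gradients, and `AdamServer.step` would produce the new global weights.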

Keywords