Mathematics (Oct 2023)

A High-Performance Federated Learning Aggregation Algorithm Based on Learning Rate Adjustment and Client Sampling

  • Yulian Gao,
  • Gehao Lu,
  • Jimei Gao,
  • Jinggang Li

DOI
https://doi.org/10.3390/math11204344
Journal volume & issue
Vol. 11, no. 20
p. 4344

Abstract

Federated learning is a distributed learning framework designed to protect user privacy and is widely applied across various domains. However, existing federated learning algorithms face challenges including slow convergence, significant loss fluctuations during aggregation, and imbalanced client sampling. To address these issues, this paper introduces a high-performance federated learning aggregation algorithm that combines a cyclic adaptive learning rate adjustment strategy with client-weighted random sampling. Weighted random sampling assigns each client a weight based on its sampling frequency, balancing client sampling rates and contributions to improve model aggregation. The algorithm also adapts the learning rate according to client loss variations and the communication round, accelerating model convergence and reducing communication costs. To evaluate the algorithm, experiments are conducted on the well-known MNIST and CIFAR-10 datasets. The results demonstrate significant improvements in convergence speed and loss stability: compared to traditional federated learning algorithms, our approach converges faster and more stably while effectively reducing training costs.
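
As a rough illustration of the two mechanisms the abstract describes, the following Python sketch pairs frequency-balanced weighted client sampling with a cyclic, loss-aware learning rate schedule. This is a minimal sketch under assumed forms: the inverse-frequency weights, the cosine-shaped cycle, the damping factor, and all function names are illustrative choices, not the paper's exact formulation.

```python
import math
import random

def weighted_client_sample(sample_counts, num_select, rng=random):
    # Weight each client inversely to how often it has been chosen so far,
    # so rarely sampled clients become more likely to be picked next.
    # (The inverse-frequency form is an illustrative assumption.)
    weights = [1.0 / (1 + c) for c in sample_counts]
    # Efraimidis-Spirakis weighted sampling without replacement:
    # draw a key u^(1/w) per client and keep the num_select largest keys.
    keys = sorted(
        ((rng.random() ** (1.0 / w), i) for i, w in enumerate(weights)),
        reverse=True,
    )
    chosen = [i for _, i in keys[:num_select]]
    for i in chosen:
        sample_counts[i] += 1  # update sampling frequencies for the next round
    return chosen

def cyclic_adaptive_lr(base_lr, round_t, cycle_len, prev_loss, curr_loss):
    # Cosine-shaped cyclic schedule over communication rounds.
    cycle_pos = (round_t % cycle_len) / cycle_len
    lr = base_lr * 0.5 * (1.0 + math.cos(math.pi * cycle_pos))
    # Loss-aware adjustment: if the aggregated loss rose since the last
    # round, damp the rate (the 0.5 factor is a hypothetical choice).
    if prev_loss is not None and curr_loss > prev_loss:
        lr *= 0.5
    return lr

# Example: 100 clients, select 10 per round for 5 communication rounds.
counts = [0] * 100
prev_loss = None
for t in range(5):
    clients = weighted_client_sample(counts, num_select=10)
    curr_loss = 1.0 / (t + 1)          # stand-in for the aggregated loss
    lr = cyclic_adaptive_lr(0.1, t, cycle_len=10,
                            prev_loss=prev_loss, curr_loss=curr_loss)
    prev_loss = curr_loss
    print(t, round(lr, 4), clients)
```

Because the sampling weights shrink as a client's selection count grows, clients that have been picked often are gradually deprioritized, which is one simple way to balance participation across rounds.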
