Journal of King Saud University: Computer and Information Sciences (Oct 2023)

Federated learning with hyper-parameter optimization

  • Majid Kundroo,
  • Taehong Kim

Journal volume & issue
Vol. 35, no. 9
p. 101740

Abstract


Federated learning is a new approach for the distributed training of a deep learning model on data scattered across a large number of clients while ensuring data privacy. However, this approach faces certain limitations, including a longer convergence time than centrally trained deep learning models. Existing federated optimization algorithms often employ the same hyper-parameters for all clients, disregarding system heterogeneity and varying local data availability, which further lengthens convergence time and increases the number of communication rounds. To address this challenge, we propose FedHPO, a new federated optimization algorithm that adaptively adjusts each client's local hyper-parameters, such as the learning rate and the number of local epochs, during training. This adaptability lets each client's local model converge more quickly, which in turn helps the global model converge faster, reducing both the overall convergence time and the required communication rounds. In addition, FedHPO introduces no additional complexity, since each client adjusts its hyper-parameters independently based on the training results obtained in each epoch. In our evaluation, we compare FedHPO with other algorithms, namely FedAVG, FedAVGM, FedProx, and FedYogi, on both IID and non-IID distributed datasets. The results demonstrate the promise of FedHPO, which converges in less time and in fewer communication rounds than the alternative algorithms.
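The abstract does not spell out the adaptation rule, so the following is a minimal NumPy sketch of the general idea under stated assumptions: each client privately tunes its own learning rate and local-epoch budget from the per-epoch loss it observes, while the server performs plain FedAvg averaging. The halve-on-worsening rule, the 1%-improvement epoch extension, and the function local_train are illustrative assumptions, not FedHPO's actual update.

import numpy as np

rng = np.random.default_rng(0)

def local_train(w, X, y, lr, epochs, max_extra=5):
    """One client's local gradient descent on least-squares loss.
    Adapts its own learning rate and epoch budget from per-epoch loss
    (assumed rule for illustration; not the paper's exact update)."""
    prev_loss = np.inf
    epoch, budget = 0, epochs
    while epoch < budget:
        grad = 2.0 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w = w - lr * grad
        loss = float(np.mean((X @ w - y) ** 2))
        if loss > prev_loss:
            lr *= 0.5                              # loss got worse: shrink the step
        elif (prev_loss - loss) > 0.01 * prev_loss and budget < epochs + max_extra:
            budget += 1                            # still improving fast: one more epoch
        prev_loss = loss
        epoch += 1
    return w, lr

# Simulated federation: 5 clients, each holding a shard of a noisy linear task.
w_true = np.array([1.0, -2.0, 0.5])
shards = []
for _ in range(5):
    X = rng.normal(size=(40, 3))
    shards.append((X, X @ w_true + 0.1 * rng.normal(size=40)))

w_global = np.zeros(3)
lrs = [0.1] * len(shards)                          # each client keeps its adapted lr
for rnd in range(10):                              # communication rounds
    updates = []
    for i, (X, y) in enumerate(shards):
        w_i, lrs[i] = local_train(w_global.copy(), X, y, lrs[i], epochs=3)
        updates.append(w_i)
    w_global = np.mean(updates, axis=0)            # FedAvg-style equal-weight averaging
    global_loss = np.mean([np.mean((X @ w_global - y) ** 2) for X, y in shards])
    print(f"round {rnd}: global MSE = {global_loss:.4f}")

Note that the server-side aggregation is unchanged relative to FedAvg; all of the adaptation happens on the client, using only quantities the client already computes during local training, which matches the abstract's claim of no added complexity.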
