IEEE Access (Jan 2022)

Toward Efficient Hierarchical Federated Learning Design Over Multi-Hop Wireless Communications Networks

  • Tu Viet Nguyen,
  • Nhan Duc Ho,
  • Hieu Thien Hoang,
  • Cuong Danh Do,
  • Kok-Seng Wong

DOI
https://doi.org/10.1109/ACCESS.2022.3215758
Journal volume & issue
Vol. 10
pp. 111910–111922

Abstract

Federated learning (FL) has recently received considerable attention and is becoming a popular machine learning (ML) framework that allows clients to train ML models in a decentralized fashion without sharing their private datasets. In the FL framework, data for learning tasks are acquired and processed locally at edge nodes, and only the updated ML parameters are transmitted to the central server for aggregation. However, because local FL parameters and the global FL model are transmitted over wireless links, wireless network performance affects FL training performance. In particular, because the number of resource blocks is limited, so is the number of devices that can participate in FL. Furthermore, edge nodes often face substantial resource constraints, such as memory, computational power, communication, and energy, which severely limit their capability to train large models locally. This paper proposes a two-hop communication protocol with a dynamic resource allocation strategy that allocates the limited network bandwidth to the maximum number of clients participating in FL. In particular, we build on a conventional hierarchical FL scheme with an adaptive grouping mechanism that selects participating clients and elects a leader for each group based on each member's capability to upload the aggregated parameters to the central server. Our experimental results demonstrate that the proposed solution outperforms the baseline algorithm in terms of communication cost and model accuracy.
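The two-hop aggregation the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes sample-weighted FedAvg as the aggregation rule and uses a client's uplink rate as the leader-election criterion (the paper only states that the leader is chosen by its capability to upload the aggregated parameters); the function and field names are hypothetical.

```python
def fedavg(updates):
    """Sample-weighted average of (params, n_samples) pairs (FedAvg-style)."""
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    avg = [sum(p[i] * n for p, n in updates) / total for i in range(dim)]
    return avg, total

def hierarchical_round(groups):
    """Two-hop round: within each group, members send updates one hop to the
    elected leader, which aggregates them; only leaders make the second hop
    to the central server, which aggregates the group-level models."""
    leader_uploads = []
    for members in groups:
        # Leader election: here, the member with the best uplink rate
        # (an illustrative proxy for "capability to upload").
        leader = max(members, key=lambda c: c["uplink_rate"])
        group_model, n = fedavg([(c["params"], c["n_samples"]) for c in members])
        # Only `leader` transmits `group_model` over the second hop.
        leader_uploads.append((group_model, n))
    global_model, _ = fedavg(leader_uploads)
    return global_model
```

With this structure, per-round uplink traffic to the server scales with the number of groups rather than the number of clients, which is the communication saving the abstract claims.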

Keywords