IEEE Access (Jan 2024)

UniFL: Accelerating Federated Learning Using Heterogeneous Hardware Under a Unified Framework

  • Biyao Che,
  • Zixiao Wang,
  • Ying Chen,
  • Liang Guo,
  • Yuan Liu,
  • Yuan Tian,
  • Jizhuang Zhao

DOI
https://doi.org/10.1109/ACCESS.2023.3347521
Journal volume & issue
Vol. 12
pp. 582 – 598

Abstract

Federated learning (FL) is now considered a critical method for breaking down data silos. However, data encryption can significantly increase computing time, limiting large-scale deployment of FL. While hardware acceleration can be an effective solution, existing research has largely focused on a single hardware type, which hinders the acceleration of FL across the heterogeneous hardware of different participants. In light of this challenge, this paper proposes a novel FL acceleration framework that supports diverse types of hardware. First, we analyze the key elements of FL to clarify our accelerator design goals. Second, we propose a unified acceleration framework that divides FL into four layers, providing a basis for the compatibility and implementation of heterogeneous hardware acceleration. Then, based on the physical properties of three mainstream acceleration hardware types, i.e., GPU, ASIC, and FPGA, we detail the architecture design of the corresponding heterogeneous accelerators under the framework. Finally, we validate the effectiveness of the proposed heterogeneous hardware acceleration framework through experiments. For specific algorithms, our implementation achieves state-of-the-art acceleration compared to previous work. For end-to-end acceleration performance, we gain $12\times$, $7.7\times$, and $2.2\times$ improvement on GPU, ASIC, and FPGA, respectively, compared to a CPU baseline in large-scale vertical linear regression training tasks.

Keywords