Journal of Applied Science and Engineering (Jan 2025)

SparseBatch: Communication-efficient Federated Learning with Partially Homomorphic Encryption

  • Chong Wang,
  • Jing Wang,
  • Zheng Lou,
  • Linghai Kong,
  • WeiSong Tao,
  • Yun Wang

DOI
https://doi.org/10.6180/jase.202508_28(8).0003
Journal volume & issue
Vol. 28, no. 8
pp. 1645 – 1656

Abstract


Cross-silo federated learning (FL) enables collaborative model training among organizations (e.g., financial or medical institutions). It operates by aggregating local gradient updates contributed by participating clients while safeguarding the privacy of sensitive data. Industrial FL frameworks employ additively homomorphic encryption (HE) to mask local gradient updates during aggregation, guaranteeing that no individual update is revealed. However, this protection incurs significant computational and communication overhead: encryption and decryption operations occupy the majority of the training time, and the bit length of a ciphertext is two orders of magnitude larger than that of the corresponding plaintext, inflating the volume of data transferred. In this paper, we present a new gradient sparsification method, SparseBatch. By designing a new general gradient correction method and adopting the Lion optimizer's gradient quantization method, SparseBatch combines gradient sparsification and quantization. Experimental results show that, compared with BatchCrypt, SparseBatch reduces the computation and communication overhead by 5×, with an accuracy reduction of less than 1%.
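To illustrate the kind of compression the abstract describes, below is a minimal, hypothetical Python sketch combining top-k gradient sparsification with sign-style quantization (in the spirit of Lion's sign-based updates) and an error-feedback residual as a stand-in for the paper's gradient correction. The function name, the k_ratio parameter, and the correction scheme are assumptions for illustration, not the paper's actual algorithm.

    # Hypothetical sketch: top-k sparsification + sign quantization with
    # error feedback. Only (idx, signs, scale) would be encrypted and sent;
    # the residual stays on the client for the next round.
    import numpy as np

    def compress_gradient(grad, residual, k_ratio=0.01):
        """Return (indices, signs, scale, new_residual) for the corrected gradient."""
        corrected = grad + residual                          # apply accumulated correction
        k = max(1, int(k_ratio * corrected.size))
        idx = np.argpartition(np.abs(corrected), -k)[-k:]    # top-k entries by magnitude
        signs = np.sign(corrected[idx]).astype(np.int8)      # 1-bit-style quantization
        scale = np.abs(corrected[idx]).mean()                # shared magnitude per message
        new_residual = corrected.copy()                      # carry untransmitted mass
        new_residual[idx] -= signs * scale                   # plus quantization error
        return idx, signs, scale, new_residual

    # Usage: a client compresses its local gradient before HE encryption.
    rng = np.random.default_rng(0)
    g = rng.normal(size=10_000).astype(np.float32)
    res = np.zeros_like(g)
    idx, signs, scale, res = compress_gradient(g, res, k_ratio=0.01)
    print(idx.size, scale)

Under such a scheme, the payload shrinks from the full gradient vector to k indices, k signs, and one scale, which is what makes the subsequent HE encryption and transfer cheaper; the residual buffer is one common way to keep the compression from biasing convergence.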

Keywords