Algorithms (Sep 2022)

Federated Optimization of ℓ0-norm Regularized Sparse Learning

  • Qianqian Tong,
  • Guannan Liang,
  • Jiahao Ding,
  • Tan Zhu,
  • Miao Pan,
  • Jinbo Bi

DOI
https://doi.org/10.3390/a15090319
Journal volume & issue
Vol. 15, no. 9
p. 319

Abstract

Regularized sparse learning with the ℓ0-norm is important in many areas, including statistical learning and signal processing. Iterative hard thresholding (IHT) methods are the state of the art for nonconvex-constrained sparse learning because they can recover the true support and scale to large datasets. The current theoretical analysis of IHT, however, assumes centralized IID data. In realistic large-scale scenarios, data are distributed, seldom IID, and kept private on edge computing devices. It is therefore necessary to study the properties of IHT in a federated environment, where local devices update the sparse model individually and communicate with a central server infrequently for aggregation, without sharing local data. In this paper, we propose the first group of federated IHT methods: Federated Hard Thresholding (Fed-HT) and Federated Iterative Hard Thresholding (FedIter-HT), both with theoretical guarantees. We prove that both algorithms attain a linear convergence rate and guarantee recovery of the optimal sparse estimator, comparable to classic IHT methods, but with decentralized, non-IID, and unbalanced data. Empirical results demonstrate that Fed-HT and FedIter-HT outperform their competitor, a distributed IHT, reducing objective values with fewer communication rounds and lower bandwidth requirements.
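To make the federated IHT pattern described above concrete, the following is a minimal NumPy sketch of the Fed-HT idea: devices run local gradient steps on their own data, and the server averages the local models and applies a hard-thresholding projection that keeps only the k largest-magnitude coordinates. This is an illustration under stated assumptions, not the authors' exact algorithm; the least-squares local objective, the averaging rule, the learning rate, and all names (hard_threshold, local_sgd_steps, fed_ht) are hypothetical.

    import numpy as np

    def hard_threshold(w, k):
        # Keep the k largest-magnitude entries of w; zero out the rest.
        out = np.zeros_like(w)
        idx = np.argsort(np.abs(w))[-k:]
        out[idx] = w[idx]
        return out

    def local_sgd_steps(w, X, y, lr, steps):
        # A few full-gradient steps on a least-squares loss, standing in
        # for a device's local objective (illustrative assumption).
        for _ in range(steps):
            grad = X.T @ (X @ w - y) / len(y)
            w = w - lr * grad
        return w

    def fed_ht(clients, k, dim, rounds=50, local_steps=5, lr=0.1):
        # Sketch of a federated hard-thresholding loop: local updates,
        # then server-side averaging followed by hard thresholding.
        w = np.zeros(dim)
        for _ in range(rounds):
            local_models = [local_sgd_steps(w.copy(), X, y, lr, local_steps)
                            for X, y in clients]
            w = hard_threshold(np.mean(local_models, axis=0), k)
        return w

    # Synthetic demo: a 5-sparse ground truth split across 4 clients.
    rng = np.random.default_rng(0)
    dim, k = 100, 5
    w_true = np.zeros(dim)
    support = rng.choice(dim, k, replace=False)
    w_true[support] = rng.normal(size=k)
    clients = []
    for _ in range(4):
        X = rng.normal(size=(200, dim))
        clients.append((X, X @ w_true + 0.01 * rng.normal(size=200)))
    w_hat = fed_ht(clients, k, dim)
    print("recovered support matches:", set(np.flatnonzero(w_hat)) == set(support))

Plausibly, moving the hard_threshold call inside the local update loop, so that each device projects after every gradient step, would correspond to the FedIter-HT variant named in the abstract, though the paper itself should be consulted for the precise update rules.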

Keywords