IEEE Access (Jan 2023)

Hybrid Distributed Optimization for Learning Over Networks With Heterogeneous Agents

  • Mohammad H. Nassralla
  • Naeem Akl
  • Zaher Dawy

DOI
https://doi.org/10.1109/ACCESS.2023.3317298
Journal volume & issue
Vol. 11
pp. 103530–103543

Abstract

This paper considers distributed optimization for learning problems over networks with heterogeneous agents that have different computational capabilities. This heterogeneity implies that a subset of the agents may run computationally intensive learning algorithms such as Newton's method or full gradient descent, while the other agents can only run lower-complexity algorithms such as stochastic gradient descent. This creates opportunities for designing hybrid distributed optimization algorithms that rely on cooperation among the network agents in order to enhance overall performance, improve the rate of convergence, and reduce the communication overhead. We show in this work that hybrid learning with cooperation among heterogeneous agents attains a stable solution. For small step-sizes $\mu$, the proposed approach leads to a small estimation error on the order of $O(\mu)$. We also provide a theoretical analysis of the stability of the first-, second-, and fourth-order error moments for learning over networks with heterogeneous agents. Finally, results are presented and analyzed for case study scenarios to demonstrate the effectiveness of the proposed approach.
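The abstract describes the setting only at a high level. As a rough, non-authoritative illustration of hybrid cooperation among heterogeneous agents, the following Python sketch implements a generic adapt-then-combine diffusion strategy on a least-squares problem, where two agents can only afford stochastic gradient steps while the others run a full-gradient or Newton-like update. The network topology, combination matrix, step-size, batch sizes, and the assignment of updates to agents are all illustrative assumptions, not the paper's algorithm.

    import numpy as np

    rng = np.random.default_rng(0)
    d, mu = 5, 0.01                      # dimension and (small) step-size
    w_star = rng.standard_normal(d)      # common minimizer sought by all agents

    # Hypothetical 4-agent ring network; A is doubly stochastic (combination weights).
    A = np.array([[0.50, 0.25, 0.00, 0.25],
                  [0.25, 0.50, 0.25, 0.00],
                  [0.00, 0.25, 0.50, 0.25],
                  [0.25, 0.00, 0.25, 0.50]])
    N = A.shape[0]
    W = np.zeros((N, d))                 # current iterates, one row per agent

    def sample_batch(n):
        """Draw n regression pairs (x, y) with y = x^T w_star + noise."""
        X = rng.standard_normal((n, d))
        y = X @ w_star + 0.1 * rng.standard_normal(n)
        return X, y

    def sgd_step(w):                     # low-complexity agent: single sample
        X, y = sample_batch(1)
        return w - mu * X.T @ (X @ w - y)

    def full_grad_step(w):               # capable agent: large batch ~ full gradient
        X, y = sample_batch(200)
        return w - mu * X.T @ (X @ w - y) / len(y)

    def newton_step(w):                  # capable agent: Newton-like update
        X, y = sample_batch(200)
        H = X.T @ X / len(y) + 1e-6 * np.eye(d)   # regularized Hessian estimate
        g = X.T @ (X @ w - y) / len(y)
        return w - np.linalg.solve(H, g)

    # Heterogeneous update rules: agents 0 and 2 are low-complexity (SGD only).
    updates = [sgd_step, newton_step, sgd_step, full_grad_step]

    for _ in range(2000):
        psi = np.array([updates[k](W[k]) for k in range(N)])   # adapt step
        W = A @ psi                                            # combine step

    print("per-agent squared error:", np.sum((W - w_star) ** 2, axis=1))

In this sketch the combination matrix A is taken to be doubly stochastic, a common sufficient condition in diffusion-learning analyses for the agents to agree on the common minimizer. Under such assumptions, the $O(\mu)$ steady-state error claimed in the abstract would correspond to the residual per-agent squared error remaining proportional to the step-size $\mu$ as $\mu$ becomes small.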

Keywords