Results in Control and Optimization (Mar 2023)

DCG: An efficient Distributed Conjugate Gradient algorithm for solving linear equations in multi-agent networks

  • Haodi Ping,
  • Yongcai Wang,
  • Chengcun Wei,
  • Jin Xi,
  • Taoming Zhang,
  • Yong Gao

Journal volume & issue
Vol. 10
p. 100213

Abstract


Distributed algorithms for solving linear equations in multi-agent networks have attracted great research attention, and many iteration-based distributed algorithms have been developed. Convergence speed is a crucial property of distributed algorithms, and it depends on the spectral radius of the iteration matrix. However, the iteration matrix is determined by the network structure and is difficult to pre-tune. Consequently, iteration-based distributed algorithms converge slowly when the spectral radius is close to 1. In contrast, in centralized optimization, the Conjugate Gradient (CG) method is widely adopted to speed up centralized solvers and guarantees convergence in a fixed number of steps. In this paper, we propose a distributed implementation of the Conjugate Gradient method, termed DCG. DCG is fully distributed, requiring only local communication and local computation, and it inherits the fast convergence of CG: it is guaranteed to converge within 3Hn rounds, where H is the maximum hop number of the network and n is the number of nodes. The applications of DCG to the least squares problem and to network localization in both 2D and 3D space are analyzed. The applicability of DCG to tracking mobile robot networks is also validated. The results show that DCG converges several orders of magnitude faster than traditional iteration-based distributed methods. In addition, we offer the code at https://github.com/Haodi-Ping/DCG.
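For context, the sketch below shows the classical centralized Conjugate Gradient iteration that DCG distributes across agents; it is a minimal illustration of the fixed-step convergence property mentioned in the abstract, not the authors' DCG code (the function name and interface here are hypothetical; see the linked repository for the actual implementation).

```python
# Illustrative sketch: centralized Conjugate Gradient for a symmetric
# positive-definite system A x = b. In exact arithmetic CG terminates
# in at most n iterations for an n x n system, which is the fixed-step
# guarantee that DCG carries over to the distributed setting.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    n = b.shape[0]
    x = np.zeros(n)
    r = b - A @ x            # initial residual
    p = r.copy()             # initial search direction
    rs_old = r @ r
    for _ in range(n):       # at most n steps for an n x n SPD system
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:   # residual small enough: stop early
            break
        p = r + (rs_new / rs_old) * p   # next A-conjugate direction
        rs_old = rs_new
    return x

# Example on a small SPD system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # approx. [0.0909, 0.6364]
```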

Keywords