IET Control Theory & Applications (Mar 2021)

A conditional gradient algorithm for distributed online optimization in networks

  • Xiuyu Shen,
  • Dequan Li,
  • Runyue Fang,
  • Qiao Dong

DOI: https://doi.org/10.1049/cth2.12062
Journal volume & issue: Vol. 15, no. 4, pp. 570–579

Abstract


This paper addresses a network of computing nodes aiming to solve an online convex optimisation problem in a distributed manner, that is, by means of local estimation and communication, without any central coordinator. An online distributed algorithm based on the conditional gradient method is developed, which effectively tackles the high time complexity of distributed online optimisation. The global objective function is decomposed into the sum of local objective functions, and the nodes collectively minimise the sum of the local time‐varying objective functions while the communication pattern among nodes is captured by a connected undirected graph. By adding a regularisation term to the local objective function of each node, the proposed algorithm constructs a new time‐varying objective function. It also replaces the projection operation with a local linear optimisation oracle, which effectively improves the regret bound of the algorithm. By introducing the nominal regret and the global regret, the convergence properties of the proposed algorithm are analysed theoretically. It is shown that, if the objective function of each agent is strongly convex and smooth, both types of regret grow sublinearly with order O(log T), where T is the time horizon. Numerical experiments also demonstrate the advantages of the proposed algorithm over existing distributed optimisation algorithms.
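The algorithmic pattern described in the abstract (consensus averaging over an undirected graph, a regularised local objective, and a linear optimisation oracle in place of a projection step) can be illustrated with a minimal sketch. The gradient interface grads[i](x, t), the quadratic regulariser, the ℓ1-ball constraint set, and the step-size schedule below are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def linear_oracle_l1(grad, radius=1.0):
    # Linear optimisation oracle over an l1 ball (illustrative constraint set):
    # returns argmin over ||s||_1 <= radius of <grad, s>, which has a closed form.
    s = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    s[i] = -radius * np.sign(grad[i])
    return s

def distributed_online_cg(grads, W, T, d, eta=0.1, gamma=lambda t: 2.0 / (t + 2)):
    # Sketch of a distributed online conditional-gradient loop.
    # grads[i](x, t): gradient of node i's time-varying loss at x (hypothetical interface).
    # W: doubly stochastic weight matrix of the connected undirected graph.
    # eta: weight of the added regularisation term; gamma(t): step-size schedule (assumed).
    n = W.shape[0]
    X = np.zeros((n, d))               # one decision vector per node
    for t in range(T):
        X_avg = W @ X                  # consensus step: mix neighbours' estimates
        for i in range(n):
            # gradient of the regularised surrogate objective (assumption:
            # a quadratic regulariser eta*||x||^2 is added to the local loss)
            g = grads[i](X_avg[i], t) + 2.0 * eta * X_avg[i]
            s = linear_oracle_l1(g)    # linear oracle replaces the projection step
            X[i] = (1 - gamma(t)) * X_avg[i] + gamma(t) * s
    return X
```

Because the oracle only requires minimising a linear function over the constraint set, each per-node update is projection-free, which is the source of the reduced per-iteration cost highlighted in the abstract.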

Keywords