IEEE Open Journal of Signal Processing (Jan 2022)

Optimal Diffusion Learning Over Networks—Part I: Single-Task Algorithms

  • Ricardo Merched

DOI: https://doi.org/10.1109/OJSP.2022.3141968
Journal volume & issue: Vol. 3, pp. 107–127

Abstract


We revisit the theory of distributed networks of cooperative agents from a broader perspective of diffusion adaptation by exploiting proximity concepts. This leads to two main families of algorithms with enhanced convergence rate and mean-square-error performance. Part I of this work considers mainly single-task scenarios, for which optimal learning and fusion steps are formulated via an adaptive network penalty function. The main recursions, which we refer to as Adapt-and-Fuse (AAF) diffusion, are reminiscent of a reweighted network-regularized algorithm usually seen in standalone formulations. This is in line with early approaches that promote proximity among agents in cooperative networks. The AAF strategy employs exact fusion in the least-squares sense and outperforms the exact global least-squares solution that ignores the topology of the network. It also suggests simplified LMS-complexity algorithms and motivates us to develop a normalized version of the relative-variance diffusion algorithm, which also learns combination weights. It is verified that even when agents share only their uncertainties, rather than their estimates, the simplified AAF improves accuracy over the NLMS-RV algorithm in the presence of intruders and becomes more robust to noisy links. To cope with the computational burden associated with long parameter vectors and correlated inputs, an overlapped-block multidelay adaptive frequency-domain (FD) version of each new algorithm is derived. It turns out that, for correlated inputs, these FD-LMS versions outperform the exact fullband RLS solutions. In the accompanying Part II of this work, we pursue extensions to the multitask scenario. Extensive simulations illustrate the superiority of the new approaches.
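For orientation, the sketch below illustrates the generic adapt-then-combine (ATC) diffusion structure that the abstract builds on: each agent first adapts on its local data (here with an NLMS update) and then fuses its neighbors' intermediate estimates using relative-variance combination weights, in the spirit of the NLMS-RV baseline named above. This is a minimal illustration under assumed settings, not the paper's AAF recursion; the ring topology, step size mu, regularizer eps, and variance-smoothing factor nu are arbitrary demo choices.

    # Minimal sketch of generic ATC diffusion NLMS with relative-variance
    # combination weights (illustrative; NOT the paper's AAF algorithm).
    import numpy as np

    rng = np.random.default_rng(0)

    N, M, T = 10, 8, 2000              # agents, filter length, iterations
    w_true = rng.standard_normal(M)    # common (single-task) parameter vector

    # Simple ring topology; each agent's neighborhood includes itself.
    neighbors = [{k, (k - 1) % N, (k + 1) % N} for k in range(N)]

    mu, eps, nu = 0.5, 1e-6, 0.95      # NLMS step, regularizer, smoothing
    w = np.zeros((N, M))               # current estimates, one row per agent
    gamma = np.ones((N, N))            # smoothed squared-deviation estimates

    for i in range(T):
        psi = np.empty((N, M))
        # Adapt step: each agent runs a local NLMS update on its own data.
        for k in range(N):
            u = rng.standard_normal(M)
            d = u @ w_true + 0.1 * rng.standard_normal()
            e = d - u @ w[k]
            psi[k] = w[k] + mu * e * u / (eps + u @ u)
        # Combine (fuse) step: relative-variance weights favor neighbors
        # whose intermediate estimates fluctuate less around the local state.
        w_new = np.empty_like(w)
        for k in range(N):
            nbrs = sorted(neighbors[k])
            for l in nbrs:
                dev = np.sum((psi[l] - w[k]) ** 2)
                gamma[k, l] = nu * gamma[k, l] + (1 - nu) * dev
            inv = np.array([1.0 / gamma[k, l] for l in nbrs])
            a = inv / inv.sum()        # convex combination weights
            w_new[k] = sum(a_l * psi[l] for a_l, l in zip(a, nbrs))
        w = w_new

    print("mean squared deviation:", np.mean((w - w_true) ** 2))

With the relative-variance rule, the combination weights adapt online so that noisier or misbehaving neighbors are automatically discounted, which is the kind of behavior the abstract's robustness claims about intruders and noisy links refer to.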
