IEEE Access (Jan 2020)

A Class of Diffusion Zero Attracting Stochastic Gradient Algorithms With Exponentiated Error Cost Functions

  • Zhengyan Luo,
  • Haiquan Zhao,
  • Xiangping Zeng

DOI
https://doi.org/10.1109/ACCESS.2019.2961162
Journal volume & issue
Vol. 8
pp. 4885 – 4894

Abstract

In this paper, a class of diffusion zero-attracting stochastic gradient algorithms with exponentiated error cost functions is put forward for sparse system identification. Distributed estimation algorithms based on the popular mean-square error criterion behave poorly for system identification under colored noise. To overcome this drawback, a class of stochastic gradient least exponentiated (LE) algorithms with exponentiated error cost functions was proposed, achieving a lower steady-state error than the least mean square (LMS) algorithm. However, those LE algorithms may suffer performance deterioration for sparse systems. For sparse system identification over adaptive networks, a polynomial variable scaling factor improved diffusion least sum of exponentials (PZA-VSIDLSE) algorithm and an lp-norm constraint diffusion least exponentiated square (LP-DLE2) algorithm are proposed in this work. Instead of the l1-norm penalty, an lp-norm penalty and a polynomial zero-attractor are employed in the cost functions of the LE algorithms. We then derive the mean behavior model and the mean-square behavior model of the LP-DLE2 algorithm under several common assumptions. Moreover, simulations of distributed-network sparse system identification show that the proposed algorithms achieve a lower steady-state error than the existing algorithms.
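To illustrate the zero-attracting idea the abstract describes, the sketch below runs a single-node (non-diffusion) LMS filter on a sparse system and adds an lp-norm zero attractor to the update. All parameter values (step size `mu`, attractor strength `rho`, order `p`, regularizer `eps`) are illustrative assumptions, not the paper's settings, and the mean-square error cost is used in place of the exponentiated error cost for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse system to identify: most taps are exactly zero.
w_true = np.zeros(16)
w_true[[2, 9]] = [0.8, -0.5]

mu = 0.01    # step size (assumed value)
rho = 2e-4   # zero-attractor strength (assumed value)
p = 0.5      # lp-norm order, 0 < p < 1 (assumed value)
eps = 0.05   # small constant to avoid division by zero at w_i = 0

w = np.zeros_like(w_true)
for _ in range(5000):
    x = rng.standard_normal(16)                     # input regressor
    d = x @ w_true + 0.01 * rng.standard_normal()   # noisy desired signal
    e = d - x @ w                                   # a priori error
    # LMS gradient step plus an lp-norm zero attractor:
    # the (regularized) gradient of ||w||_p^p is p*sign(w)/(eps + |w|^(1-p)),
    # which pulls near-zero taps toward zero while barely biasing large taps.
    w += mu * e * x - rho * p * np.sign(w) / (eps + np.abs(w) ** (1 - p))

msd = np.sum((w - w_true) ** 2)   # mean-square deviation from the true taps
print(msd)
```

In a diffusion network each node would additionally combine its neighbors' intermediate estimates at every iteration; the attractor term is the only change relative to plain LMS here.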

Keywords