Mathematics (Dec 2022)
A Lagrange Programming Neural Network Approach with an <i>ℓ</i><sub>0</sub>-Norm Sparsity Measurement for Sparse Recovery and Its Circuit Realization
Abstract
Many analog neural network approaches to sparse recovery are based on using the ℓ1-norm as a surrogate for the ℓ0-norm. This paper proposes an analog neural network model, namely the Lagrange programming neural network with ℓp objective and quadratic constraint (LPNN-LPQC), with an ℓ0-norm sparsity measurement for solving the constrained basis pursuit denoise (CBPDN) problem. As the ℓ0-norm is non-differentiable, we first use a differentiable ℓp-norm-like function to approximate the ℓ0-norm. However, this ℓp-norm-like function does not have an explicit expression; thus, we use the locally competitive algorithm (LCA) concept to handle its nonexistence. With the LCA approach, the dynamics are defined by the internal state vector. In the proposed model, the thresholding elements are not conventional elements used in analog optimization, so this paper also proposes a circuit realization for them. On the theoretical side, we prove that the equilibrium points of the proposed method satisfy the Karush-Kuhn-Tucker (KKT) conditions of the approximated CBPDN problem and that these equilibrium points are asymptotically stable. We perform large-scale simulations on various algorithms and analog models. Simulation results show that the proposed algorithm is better than or comparable to several state-of-the-art numerical algorithms, and that it outperforms state-of-the-art analog neural models.
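To make the LCA idea in the abstract concrete, the following is a minimal numerical sketch of LCA-style dynamics, du/dt = -u + (I - AᵀA)a + Aᵀb with a = T_λ(u), simulated by Euler integration. Since the paper's ℓp-norm-like (ℓ0-approximating) thresholding element has no closed-form expression, the standard soft-threshold (the ℓ1 case) is used here as a stand-in; all function names, the step size, and the problem sizes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def soft_threshold(u, lam):
    # Soft-threshold activation (ℓ1 surrogate); a stand-in for the
    # paper's ℓ0-like thresholding element, which has no closed form.
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca_sparse_recovery(A, b, lam=0.05, dt=0.05, steps=2000):
    """Euler simulation of LCA dynamics:
       du/dt = -u + (I - A^T A) a + A^T b,  a = T_lam(u)."""
    m, n = A.shape
    u = np.zeros(n)                # internal state vector
    G = A.T @ A - np.eye(n)        # lateral inhibition matrix
    c = A.T @ b                    # driving input
    for _ in range(steps):
        a = soft_threshold(u, lam)
        u += dt * (-u - G @ a + c)
    return soft_threshold(u, lam)

# Toy compressed-sensing instance: recover a 4-sparse signal.
rng = np.random.default_rng(0)
m, n, k = 40, 100, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], k)
b = A @ x_true                     # noiseless measurements
x_hat = lca_sparse_recovery(A, b)
```

The continuous-time dynamics correspond to the analog circuit's evolution; the Euler loop is only a digital simulation of that flow, and the fixed point of the loop approximates the equilibrium point discussed in the paper.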
Keywords