IEEE Open Journal of Signal Processing (Jan 2021)

Sharpening Sparse Regularizers via Smoothing

  • Abdullah H. Al-Shabili,
  • Yining Feng,
  • Ivan Selesnick

DOI
https://doi.org/10.1109/OJSP.2021.3104497
Journal volume & issue
Vol. 2
pp. 396–409

Abstract


Non-convex sparsity-inducing penalties outperform their convex counterparts, but they generally sacrifice the convexity of the cost function. As a middle ground, we propose the sharpening sparse regularizers (SSR) framework to design non-separable non-convex penalties that induce sparsity more effectively than convex penalties such as the $\ell_1$ and nuclear norms, but without sacrificing the convexity of the cost function. The convexity of the overall problem is preserved by exploiting the relative strong convexity of the data fidelity term. The framework constructs penalties as the difference of convex functions, namely the difference between a convex sparsity-inducing penalty and its smoothed version. We propose a generalized infimal convolution smoothing technique to obtain the smoothed versions. Furthermore, SSR recovers and generalizes several non-convex penalties in the literature as special cases. The SSR framework is applicable to any sparsity-regularized least-squares formulation of an ill-posed linear inverse problem. Beyond regularized least squares, the SSR framework can be extended to accommodate Bregman divergences and other sparsity structures such as low-rankness. The SSR optimization problem can be formulated as a saddle point problem and solved by a scalable forward-backward splitting algorithm. The effectiveness of the SSR framework is demonstrated by numerical experiments in different applications.
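To make the construction concrete, the following is a minimal sketch of the difference-of-convex idea described above, assuming the Moreau envelope as the smoother (a separable special case of the infimal-convolution smoothing named in the abstract) and a plain forward-backward iteration on a sparse-deconvolution-style least-squares problem rather than the paper's saddle point scheme. The function names (`ssr_penalty`, `ssr_fbs`), the parameter values, and the synthetic test problem are illustrative assumptions, not the authors' code; in this special case the overall cost stays convex only when the smoothing parameter `mu` satisfies `mu * lambda_min(A.T @ A) >= 1`.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def moreau_env_l1(x, lam, mu):
    """Moreau envelope of lam * ||.||_1 with parameter mu
    (a special case of infimal-convolution smoothing)."""
    v = soft(x, lam * mu)  # minimizer of the infimal convolution
    return lam * np.sum(np.abs(v)) + np.sum((x - v) ** 2) / (2.0 * mu)

def ssr_penalty(x, lam, mu):
    """SSR-style penalty: convex l1 penalty minus its smoothed version."""
    return lam * np.sum(np.abs(x)) - moreau_env_l1(x, lam, mu)

def ssr_fbs(A, y, lam, mu, n_iter=1000):
    """Minimize 0.5 * ||y - A x||^2 + lam * ||x||_1 - env_mu(lam * ||.||_1)(x)
    by forward-backward splitting: a gradient step on the smooth terms
    (data fidelity minus the smoothed penalty), then a soft-threshold step."""
    x = np.zeros(A.shape[1])
    # Step size from an upper bound on the Lipschitz constant of the smooth part.
    L = np.linalg.norm(A, 2) ** 2 + 1.0 / mu
    for _ in range(n_iter):
        grad_smooth = A.T @ (A @ x - y) - (x - soft(x, lam * mu)) / mu
        x = soft(x - grad_smooth / L, lam / L)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((300, 100)) / np.sqrt(300)  # well-conditioned, tall
    x_true = np.zeros(100)
    x_true[rng.choice(100, 8, replace=False)] = rng.standard_normal(8)
    y = A @ x_true + 0.05 * rng.standard_normal(300)
    x_hat = ssr_fbs(A, y, lam=0.1, mu=10.0)  # mu chosen to keep the cost convex
    print("estimated support size:", np.count_nonzero(np.abs(x_hat) > 1e-3))
    print("SSR penalty at solution:", ssr_penalty(x_hat, 0.1, 10.0))
```

For ill-posed or underdetermined operators, this separable smoothing cannot satisfy the convexity condition, which is where the non-separable penalties constructed by the SSR framework (and its saddle point formulation) come in.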

Keywords