Symmetry (Jun 2021)

Principal Component Wavelet Networks for Solving Linear Inverse Problems

  • Bernard Tiddeman,
  • Morteza Ghahremani

DOI
https://doi.org/10.3390/sym13061083
Journal volume & issue
Vol. 13, no. 6
p. 1083

Abstract

In this paper, we propose a novel learning-based wavelet transform and demonstrate its utility as a representation for solving a number of linear inverse problems. These are asymmetric problems, in which the forward problem is easy to solve but the inverse is difficult and often ill-posed. The wavelet decomposition comprises an invertible 2D wavelet filter bank of symmetric and anti-symmetric filters, combined with a set of 1×1 convolution filters learnt via Principal Component Analysis (PCA). The 1×1 filters are needed to control the size of the decomposition. We show that applying PCA across wavelet subbands in this way produces an architecture equivalent to a separable Convolutional Neural Network (CNN), with the principal components forming the 1×1 filters and the subtraction of the mean forming the bias terms. The use of an invertible filter bank and an (approximately) invertible PCA allows us to create a deep autoencoder very simply, and avoids issues of overfitting. We investigate the construction and learning of such networks and their application to linear inverse problems via the Alternating Direction Method of Multipliers (ADMM). We use our network as a drop-in replacement for the traditional discrete wavelet transform, with wavelet shrinkage as the projection operator. The results show good potential on a number of inverse problems, including compressive sensing, in-painting, denoising and super-resolution, and significantly narrow the performance gap with Generative Adversarial Network (GAN)-based methods.
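
As a rough illustration of the idea described in the abstract, the following minimal Python/NumPy sketch shows how principal components can act as 1×1 convolution filters applied across wavelet subbands, with the channel mean providing the bias term, and how wavelet shrinkage (soft thresholding) can serve as the projection step used inside ADMM. The Haar filter bank, function names and parameter values here are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code): fixed 2D wavelet filter bank
# followed by 1x1 "filters" obtained from PCA across the subbands.
import numpy as np

def haar_subbands(x):
    # Single-level orthonormal 2D Haar decomposition of an array with even sides.
    # Returns the four subbands (LL, LH, HL, HH) stacked along the last axis.
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # symmetric (low-pass) combination
    lh = (a + b - c - d) / 2.0   # anti-symmetric combinations
    hl = (a - b + c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return np.stack([ll, lh, hl, hh], axis=-1)        # shape (H/2, W/2, 4)

def fit_pca_1x1(subbands, n_components=3):
    # Learn 1x1 filters by PCA across the subband channels: the principal
    # components become the 1x1 convolution weights and the channel mean
    # becomes the (negated) bias, as stated in the abstract.
    h, w, c = subbands.shape
    samples = subbands.reshape(-1, c)                  # one sample per pixel
    mean = samples.mean(axis=0)
    cov = np.cov(samples - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]
    components = eigvecs[:, order].T                   # (n_components, c)
    return components, mean

def apply_1x1(subbands, components, mean):
    # Equivalent to a 1x1 convolution: per-pixel matrix multiply plus bias.
    return (subbands - mean) @ components.T            # (H/2, W/2, n_components)

def soft_threshold(coeffs, lam):
    # Wavelet shrinkage, usable as the projection/denoising step inside ADMM.
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)

# Tiny usage example on a random patch.
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
sb = haar_subbands(img)
W, mu = fit_pca_1x1(sb, n_components=3)
codes = apply_1x1(sb, W, mu)
shrunk = soft_threshold(codes, lam=0.1)
print(codes.shape, shrunk.shape)

Stacking several such decomposition-plus-PCA stages, each invertible or approximately invertible, is what yields the deep autoencoder architecture referred to in the abstract.
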

Keywords