IEEE Access (Jan 2019)

Learning Robust Auto-Encoders With Regularizer for Linearity and Sparsity

  • Yong Shi,
  • Minglong Lei,
  • Rongrong Ma,
  • Lingfeng Niu

DOI
https://doi.org/10.1109/ACCESS.2019.2895884
Journal volume & issue
Vol. 7, pp. 17195–17206

Abstract


Unsupervised feature learning via auto-encoders yields low-dimensional representations in a latent space that capture the patterns of the input data. Auto-encoders with robust regularization learn features that are less sensitive to small perturbations of the inputs. However, previous robust auto-encoders depend heavily on pre-defined structure settings and often learn fully-connected networks that are prone to over-fitting. To address these limitations, we propose an explicitly regularized framework that improves the sparsity and flexibility of robust auto-encoders. First, our model encourages the activation functions to adjust themselves automatically between linear and non-linear forms. Second, the mapping functions of the encoder are constrained by group sparsity and exclusive sparsity to reduce parameter redundancy. Since the objective function contains non-smooth components, the proximal gradient method is used for optimization. We conduct experiments with single-layer and multi-layer auto-encoders on classification tasks. The numerical results show that our model achieves better accuracy than baseline models, and our method also performs better on the denoising task.
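The abstract describes weight regularization of the encoder by group sparsity and exclusive sparsity, optimized with a proximal gradient step for the non-smooth part. The sketch below is not the authors' implementation; it only illustrates the standard definitions of these two penalties (groups taken as rows of the encoder weight matrix) and a group soft-thresholding proximal step, with all hyper-parameters and helper names (group_sparsity, exclusive_sparsity, prox_group_l2) chosen for illustration.

import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.encoder = nn.Linear(d_in, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_in)

    def forward(self, x):
        return self.decoder(torch.relu(self.encoder(x)))

def group_sparsity(weight):
    # sum_g ||w_g||_2 with each row of the weight matrix treated as one group
    return weight.norm(p=2, dim=1).sum()

def exclusive_sparsity(weight):
    # sum_g ||w_g||_1^2, which discourages redundant features within a group
    return (weight.abs().sum(dim=1) ** 2).sum()

def prox_group_l2(weight, step):
    # Proximal operator of step * sum_g ||w_g||_2 (group soft-thresholding);
    # used because the group term is non-smooth and not handled by plain SGD.
    norms = weight.norm(p=2, dim=1, keepdim=True).clamp_min(1e-12)
    return weight * (1.0 - step / norms).clamp_min(0.0)

# Illustrative single training step on a dummy batch (values are placeholders).
model = AutoEncoder(d_in=784, d_hidden=128)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x = torch.randn(32, 784)

recon = model(x)
loss = nn.functional.mse_loss(recon, x) + 1e-4 * exclusive_sparsity(model.encoder.weight)
loss.backward()
opt.step()
opt.zero_grad()

with torch.no_grad():
    # Proximal (forward-backward) step for the non-smooth group-sparsity term.
    model.encoder.weight.copy_(prox_group_l2(model.encoder.weight, step=1e-2 * 1e-4))

This reflects the general proximal-gradient recipe mentioned in the abstract (gradient step on the smooth reconstruction loss, proximal step on the non-smooth regularizer); the paper's exact regularizer for linearity of the activations and its parameter grouping may differ.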

Keywords