EURASIP Journal on Advances in Signal Processing (Sep 2020)

High-dimensional neural feature design for layer-wise reduction of training cost

  • Alireza M. Javid,
  • Arun Venkitaraman,
  • Mikael Skoglund,
  • Saikat Chatterjee

DOI
https://doi.org/10.1186/s13634-020-00695-2
Journal volume & issue
Vol. 2020, no. 1
pp. 1–19

Abstract


We design a rectified linear unit (ReLU)-based multilayer neural network by mapping the feature vectors to a higher-dimensional space in every layer. The weight matrices of each layer are designed to ensure a reduction of the training cost as the number of layers increases. Linear projection to the target in the higher-dimensional space leads to a lower training cost if a convex cost is minimized. An ℓ2-norm convex constraint is used in the minimization to reduce the generalization error and avoid overfitting. The regularization hyperparameters of the network are derived analytically to guarantee a monotonic decrement of the training cost, thereby eliminating the need for cross-validation to find the regularization hyperparameter of each layer. We show that the proposed architecture is norm-preserving and provides an invertible feature vector, and can therefore be used to reduce the training cost of any other learning method that employs linear projection to estimate the target.
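A minimal sketch of the layer-wise idea described above, under stated assumptions: each layer lifts the features by stacking [Wx; −Wx] before the ReLU, which is invertible (since relu(z) − relu(−z) = z) and norm-preserving (since ‖relu(z)‖² + ‖relu(−z)‖² = ‖z‖²), and a regularized linear projection to the target is solved after every layer. The random orthonormal weights and the small fixed ridge parameter here are illustrative stand-ins; the paper derives the weights and hyperparameters analytically to guarantee the monotonic cost decrease.

```python
import numpy as np

rng = np.random.default_rng(0)

def lift(F, W):
    """Lift features to a higher-dimensional space: z = F @ W.T,
    then ReLU of [z, -z]. Because relu(z) - relu(-z) = z, the
    pre-activation (and hence F, for invertible W) is linearly
    recoverable, so linear-projection cost cannot increase."""
    z = F @ W.T
    return np.maximum(np.hstack([z, -z]), 0.0)

def projection_cost(F, T, lam=1e-6):
    """Convex (ridge-regularized) linear projection to the target T;
    lam is a tiny stabilizer, not the paper's analytic hyperparameter."""
    O = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ T)
    return float(np.linalg.norm(F @ O - T) ** 2)

# Toy regression data (hypothetical, for illustration only).
X = rng.standard_normal((200, 10))
T = rng.standard_normal((200, 3))

feat = X
costs = [projection_cost(feat, T)]
for layer in range(3):
    # Random orthogonal weights (stand-in for the designed matrices);
    # orthogonality keeps the lifted map norm-preserving.
    W, _ = np.linalg.qr(rng.standard_normal((feat.shape[1], feat.shape[1])))
    feat = lift(feat, W)
    costs.append(projection_cost(feat, T))
# costs is non-increasing layer by layer, and each row of feat
# keeps the norm of the corresponding row of X.
```

Running the loop, the training cost after each added layer is no larger than before, and the row norms of the lifted features match those of the input, mirroring the norm-preserving and invertibility claims of the abstract.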

Keywords