Sensors & Transducers (May 2023)

Pruning Feedforward Polynomial Neural Networks with Smoothing Elastic Net Regularization

  • Khidir Shaib Mohamed

Journal volume & issue
Vol. 260, no. 1
pp. 14–23

Abstract


Gradient methods are widely used for training and pruning neural networks, with regularization terms serving primarily to remove redundant weights. Many machine learning libraries use elastic net regularization (ENR), also called double regularization, a combination of 1-norm and 2-norm regularization that tends to have a grouping effect in which correlated input features receive equal weights. This paper proposes a batch gradient method with smoothing elastic net regularization for pruning feedforward polynomial neural networks (FFPNNs), especially pi-sigma neural networks (PSNNs). Although elastic net regularization, unlike the 0-norm, does not produce an NP-hard problem, it contains the 1-norm, which is non-differentiable, so the gradient method cannot be applied directly. To overcome this obstacle, we replace the 1-norm with a continuous, differentiable function, which yields the smoothing elastic net regularization. Under this scheme, a monotonicity theorem and two convergence theorems, one for weak convergence and one for strong convergence, are established. The experimental findings support the validity of the proposed theorems. According to the numerical results, the smoothing double regularization improves generalization performance and accelerates the learning process.
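As a rough illustration of the idea rather than the paper's exact algorithm, the sketch below trains a single pi-sigma unit by batch gradient descent with a smoothed elastic net penalty. It assumes the common smoothing choice sqrt(w^2 + mu^2) for |w| (the paper's smoothing function may differ), a sigmoid output, a squared-error loss, and illustrative names and coefficients (smooth_abs, batch_gradient_step, lam1, lam2, mu).

```python
import numpy as np

def smooth_abs_grad(w, mu=1e-3):
    # Gradient of sqrt(w^2 + mu^2), a continuous, differentiable
    # surrogate for |w| (an assumed smoothing choice, not the paper's).
    return w / np.sqrt(w * w + mu * mu)

def pi_sigma_forward(W, theta, x):
    # Pi-sigma unit: K linear "sigma" units, multiplied together
    # in a product unit, then passed through a sigmoid.
    h = W @ x + theta              # summing layer, shape (K,)
    p = np.prod(h)                 # product unit
    return 1.0 / (1.0 + np.exp(-p)), h, p

def batch_gradient_step(W, theta, X, y, lr=0.05,
                        lam1=1e-4, lam2=1e-4, mu=1e-3):
    # One batch (full-gradient) step on squared error plus the
    # smoothed elastic net penalty lam1*sum(sqrt(w^2+mu^2)) + lam2*sum(w^2).
    gW = np.zeros_like(W)
    gt = np.zeros_like(theta)
    for x, t in zip(X, y):
        out, h, p = pi_sigma_forward(W, theta, x)
        d = (out - t) * out * (1.0 - out)           # dE/dp
        for j in range(W.shape[0]):
            prod_others = np.prod(np.delete(h, j))  # dp/dh_j
            gW[j] += d * prod_others * x
            gt[j] += d * prod_others
    gW += lam1 * smooth_abs_grad(W, mu) + 2.0 * lam2 * W
    gt += lam1 * smooth_abs_grad(theta, mu) + 2.0 * lam2 * theta
    return W - lr * gW, theta - lr * gt

# Toy usage: two inputs, two sigma units, a small synthetic batch.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)
W = rng.normal(scale=0.5, size=(2, 2))
theta = np.zeros(2)
for _ in range(200):
    W, theta = batch_gradient_step(W, theta, X, y)
```

Because the smoothed 1-norm term keeps a nonzero gradient that pushes redundant weights toward zero, weights below a small threshold after training can be pruned, which is the effect the regularizer is intended to produce.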

Keywords