Computers (May 2022)

Algebraic Zero Error Training Method for Neural Networks Achieving Least Upper Bounds on Neurons and Layers

  • Juraj Kacur

DOI
https://doi.org/10.3390/computers11050074
Journal volume & issue
Vol. 11, no. 5
p. 74

Abstract


In the domain of artificial neural networks, it is important to know their representation, classification, and generalization capabilities; there is also a need for time- and resource-efficient training algorithms. Here, a new zero-error training method is derived for digital computers and single-hidden-layer networks. This method also achieves the least upper bound on the number of hidden neurons. The bound states that if there are N input vectors expressed as rational numbers, a network having N − 1 neurons in the hidden layer and M neurons at the output represents a bounded function F: R^D → R^M for all input vectors. Such a network has massively shared weights calculated by 1 + M regular systems of linear equations. Compared to similar approaches, this new method achieves a theoretical least upper bound, is fast, robust, adapted to floating-point data, and uses few free parameters, as documented by theoretical analyses and comparative tests. In theory, this method provides a new constructive proof of the least upper bound on the number of hidden neurons, extends the classes of supported activation functions, and relaxes the conditions on mapping functions. In practice, it is a non-iterative zero-error training algorithm that yields a minimum number of neurons and layers.
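The abstract's core idea, fitting a single-hidden-layer network exactly by solving regular linear systems rather than by iterative training, can be illustrated with a generic sketch. The construction below is not the paper's algebraic method (the shared-weight structure and the specific 1 + M systems are not reproduced here); it is a minimal, hypothetical example showing how N training points can be fit with zero error using N − 1 hidden neurons plus an output bias, so that the output weights come from one square linear system per batch of targets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (sizes chosen for illustration only):
# N input vectors in R^D, targets in R^M.
N, D, M = 8, 3, 2
X = rng.standard_normal((N, D))
Y = rng.standard_normal((N, M))

# Hidden layer with N - 1 neurons. In the paper the hidden weights come
# from a linear system; here they are drawn at random as a stand-in.
W_in = rng.standard_normal((D, N - 1))
b_in = rng.standard_normal(N - 1)
H = np.tanh(X @ W_in + b_in)           # N x (N-1) hidden activations

# Append a bias column so the output system is square (N x N) and,
# generically, regular; then solve for all M output columns at once.
Ha = np.hstack([H, np.ones((N, 1))])   # N x N
W_out = np.linalg.solve(Ha, Y)         # exact, non-iterative fit

# Zero training error up to floating-point round-off:
err = np.max(np.abs(Ha @ W_out - Y))
print(err < 1e-9)
```

The non-iterative character is the point: once the hidden activations are fixed, the output weights are obtained in a single linear solve, with no gradient descent involved.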

Keywords