Наукові вісті Національного технічного університету України "Київський політехнічний інститут" (Oct 2017)
Method of Encoding the Output Signal of Neural Network Models
Abstract
Background. A significant drawback of the current technology for building neural network models based on the multilayer perceptron is that, when the parameters of the case studies are encoded, the expected output signal does not reflect the similarity between the standards of the classes to be recognized. Objective. The aim of the paper is to develop a method for encoding the expected output of the case studies that reflects the similarity between the standards of the recognized classes. Methods. The encoding method is based on a probabilistic neural network in whose case studies the expected output signal is specified not in numerical form but as the name of the class to be recognized. During recognition, the numerical output signal of the network can then express the similarity of the input image to each class presented to it during training. Results. An encoding method has been developed that, owing to the use of a probabilistic neural network, takes the similarity between the standards of the recognized classes into account in the expected output signal of the case studies. Conclusions. The proposed method reduces the number of training iterations needed to reach an acceptable learning error within 1 % by a factor of 1.3–1.5.
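The following is a minimal sketch, not the authors' implementation, of the idea described in Methods: a probabilistic neural network whose per-class Gaussian-kernel scores are normalized and used as the encoded target vector, so the expected output reflects how similar a sample is to the standard of each class. The class names, the smoothing parameter sigma, and the toy data are assumptions introduced for illustration only.

```python
import numpy as np

def pnn_class_similarities(x, train_x, train_labels, sigma=0.5):
    """Return {class_name: normalized similarity of x to that class}."""
    scores = {}
    labels = np.array(train_labels)
    for label in set(train_labels):
        # Parzen-window estimate: average Gaussian kernel over the class samples.
        samples = train_x[labels == label]
        d2 = np.sum((samples - x) ** 2, axis=1)
        scores[label] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    total = sum(scores.values()) or 1.0
    # Normalized similarities sum to 1 and can serve as the expected output
    # vector of a case study instead of a one-hot code.
    return {label: s / total for label, s in scores.items()}

# Toy usage with hypothetical class names and data.
train_x = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
train_labels = ["class_A", "class_A", "class_B", "class_B"]
print(pnn_class_similarities(np.array([0.2, 0.1]), train_x, train_labels))
```

Under these assumptions, the similarity vector returned for each training sample would replace the usual binary target when training the multilayer perceptron.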
Keywords