Gazi Üniversitesi Fen Bilimleri Dergisi (Jun 2019)

A Survey of Hyper-parameter Optimization Methods in Convolutional Neural Networks

  • Ayla GÜLCÜ,
  • Zeki KUŞ

DOI
https://doi.org/10.29109/gujsc.514483
Journal volume & issue
Vol. 7, no. 2
pp. 503–522

Abstract

Convolutional neural networks (CNNs) are special types of multi-layer artificial neural networks in which the convolution operation is used instead of matrix multiplication in at least one of the layers. Although CNNs have achieved satisfactory results, especially in computer vision studies, they still present some difficulties. As the proposed network architectures become deeper in pursuit of better accuracy, and as the resolution of the input images increases, the need for computational power grows. Reducing the computational cost while still maintaining high accuracy depends both on the use of powerful hardware and on the selection of hyper-parameter values for the CNN. In this study, we examine methods that have been used extensively to optimize CNN hyper-parameters, such as Genetic Algorithms, Particle Swarm Optimization, Differential Evolution and Bayesian Optimization, and we also list the hyper-parameters selected for optimization in those studies, the ranges of those parameter values, and the results obtained by each study. These studies reveal that the number of layers, the number and size of the kernels at each layer, the learning rate and the batch size are among the hyper-parameters that affect the performance of CNNs the most. When the studies that use the same datasets are compared in terms of accuracy, Genetic Algorithms and Particle Swarm Optimization, both population-based methods, achieve the best results for the majority of the datasets. It is also shown that the performance of the models found in these studies is competitive with, and sometimes better than, that of "state-of-the-art" models. In addition, the CNNs produced in these studies are prevented from growing too large by imposing limits on the hyper-parameter values. Thus, simpler and easier-to-train models have been obtained. These computationally advantageous simpler models were able to achieve results competitive with those of more complicated models.
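The population-based search described in the abstract can be sketched as follows. This is an illustrative example, not code from the surveyed studies: the search space covers the hyper-parameters the survey names as most influential (number of layers, kernel count and size, learning rate, batch size), but the value ranges are assumptions, and a toy scoring function stands in for the validation accuracy that a real study would obtain by training a CNN for each candidate configuration.

```python
import random

# Assumed search space over the hyper-parameters highlighted in the survey.
# Real studies define their own ranges; these are illustrative only.
SPACE = {
    "num_layers":    [2, 3, 4, 5, 6],
    "num_kernels":   [16, 32, 64, 128],
    "kernel_size":   [3, 5, 7],
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "batch_size":    [32, 64, 128],
}

def random_config(rng):
    """Sample one hyper-parameter configuration uniformly from SPACE."""
    return {k: rng.choice(v) for k, v in SPACE.items()}

def fitness(cfg):
    # Stand-in for validation accuracy after training a CNN with `cfg`.
    # A real study would train and evaluate the network here.
    return -abs(cfg["num_layers"] - 4) - 100 * abs(cfg["learning_rate"] - 1e-3)

def mutate(cfg, rng, p=0.3):
    """Resample each hyper-parameter independently with probability p."""
    return {k: (rng.choice(SPACE[k]) if rng.random() < p else v)
            for k, v in cfg.items()}

def genetic_search(generations=20, pop_size=10, seed=0):
    """Minimal genetic algorithm: truncation selection plus mutation."""
    rng = random.Random(seed)
    pop = [random_config(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]  # keep the better half (elitism)
        pop = elite + [mutate(rng.choice(elite), rng)
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

best = genetic_search()
print(best)
```

Because the elite is carried over unchanged each generation, the best fitness found never decreases; bounding every hyper-parameter to a finite list mirrors how the surveyed studies keep the generated CNNs from growing too large.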

Keywords