Applied Mathematics in Science and Engineering (Dec 2023)

Learning rate selection in stochastic gradient methods based on line search strategies

  • Giorgia Franchini,
  • Federica Porta,
  • Valeria Ruggiero,
  • Ilaria Trombini,
  • Luca Zanni

DOI
https://doi.org/10.1080/27690911.2022.2164000
Journal volume & issue
Vol. 31, no. 1

Abstract

Finite-sum problems appear as the sample average approximation of a stochastic optimization problem and often arise in machine learning applications with large-scale data sets. A very popular approach to tackle finite-sum problems is the stochastic gradient method. It is well known that a proper strategy for selecting the hyperparameters of this method (i.e. the set of a priori selected parameters) and, in particular, the learning rate, is needed to guarantee convergence properties and good practical performance. In this paper, we analyse standard and line search based updating rules for fixing the learning rate sequence, also in relation to the size of the mini-batch chosen to compute the current stochastic gradient. Extensive numerical experiments are carried out to evaluate the effectiveness of the discussed strategies on convex and non-convex finite-sum test problems, highlighting that the line search based methods avoid an expensive initial tuning of the hyperparameters. The line search based approaches have also been applied to train a Convolutional Neural Network, providing very promising results.
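To illustrate the general idea behind line search based learning rate selection in stochastic gradient methods, the following is a minimal sketch of mini-batch SGD with a backtracking (Armijo) line search evaluated on the sampled mini-batch loss. The toy least-squares problem, function names, and all parameter values (`alpha0`, `c`, `rho`) are illustrative assumptions, not the specific rules analysed in the paper.

```python
import numpy as np

# Toy finite-sum problem: least squares, f(x) = (1/n) * sum_i 0.5*(a_i^T x - b_i)^2.
# All names and constants here are illustrative, not the paper's method.
rng = np.random.default_rng(0)
n_samples, n_features = 200, 5
A = rng.normal(size=(n_samples, n_features))
b = A @ rng.normal(size=n_features) + 0.1 * rng.normal(size=n_samples)

def batch_loss_grad(x, idx):
    """Loss and gradient of the least-squares objective averaged over mini-batch idx."""
    r = A[idx] @ x - b[idx]
    return 0.5 * np.mean(r**2), A[idx].T @ r / len(idx)

def sgd_armijo(x, n_epochs=30, batch_size=20, alpha0=1.0, c=1e-4, rho=0.5):
    """Mini-batch SGD; the learning rate alpha is set at each step by
    backtracking until the Armijo sufficient-decrease condition holds
    on the current mini-batch loss."""
    for _ in range(n_epochs):
        for idx in np.array_split(rng.permutation(n_samples),
                                  n_samples // batch_size):
            f, g = batch_loss_grad(x, idx)
            alpha = alpha0
            # Shrink alpha until f(x - alpha*g) <= f(x) - c*alpha*||g||^2
            while batch_loss_grad(x - alpha * g, idx)[0] > f - c * alpha * (g @ g):
                alpha *= rho
            x = x - alpha * g
    return x

x = sgd_armijo(np.zeros(n_features))
print("final mean squared residual:", np.mean((A @ x - b) ** 2))
```

Because the sufficient-decrease test is checked on the same mini-batch used for the gradient, each step adapts the learning rate to the local curvature seen by that batch, which is what removes the need for an expensive a priori grid search over fixed learning rates.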

Keywords