IET Control Theory & Applications (Nov 2021)

Almost sure convergence of randomised‐difference descent algorithm for stochastic convex optimisation

  • Xiaoxue Geng,
  • Gao Huang,
  • Wenxiao Zhao

DOI
https://doi.org/10.1049/cth2.12184
Journal volume & issue
Vol. 15, no. 17
pp. 2183 – 2194

Abstract

The stochastic gradient descent algorithm is a classical and useful method for stochastic optimisation. While stochastic gradient descent has been theoretically investigated for decades and successfully applied in machine learning, for example in the training of deep neural networks, it essentially relies on obtaining unbiased estimates of the gradients/subgradients of the objective function. In this paper, by constructing randomised differences of the objective function, a gradient-free algorithm, named the stochastic randomised-difference descent algorithm, is proposed for stochastic convex optimisation. Under the strong convexity assumption on the objective function, it is proved that the estimates generated by stochastic randomised-difference descent converge to the optimal value with probability one, and the convergence rates of both the mean square error of the estimates and the regret functions are established. Finally, some numerical examples are presented.
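
To make the gradient-free idea concrete, below is a minimal sketch of a randomised-difference descent step. The perturbation distribution, step-size and difference-gap schedules, and the noisy quadratic objective are illustrative assumptions, not the paper's exact construction; only two noisy function evaluations per iteration are used, with no gradient or subgradient oracle.

```python
import numpy as np

def noisy_objective(x, rng):
    # Illustrative strongly convex objective f(x) = ||x - 1||^2 observed with additive noise.
    return np.sum((x - 1.0) ** 2) + rng.normal(scale=0.1)

def randomised_difference_descent(x0, num_iters=2000, a=0.5, c=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(num_iters):
        a_k = a / (k + 1)            # decaying step size (assumed schedule)
        c_k = c / (k + 1) ** 0.25    # decaying difference gap (assumed schedule)
        delta = rng.choice([-1.0, 1.0], size=x.shape)  # random perturbation direction
        # Two noisy evaluations of the objective along the random direction.
        f_plus = noisy_objective(x + c_k * delta, rng)
        f_minus = noisy_objective(x - c_k * delta, rng)
        # Randomised-difference estimate of the gradient (SPSA-style form).
        grad_est = (f_plus - f_minus) / (2.0 * c_k) * delta
        x = x - a_k * grad_est
    return x

if __name__ == "__main__":
    x_hat = randomised_difference_descent(np.zeros(5))
    print("estimate:", x_hat)  # should approach the minimiser (1, ..., 1)
```

In this sketch the Rademacher perturbations satisfy 1/delta_i = delta_i, so multiplying the difference quotient by delta gives the usual simultaneous-perturbation form of the estimate.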

Keywords