Open Mathematics (Oct 2022)
On stochastic accelerated gradient with convergence rate
Abstract
This article studies the regression learning problem from given sample data by using a stochastic approximation (SA) type algorithm, namely, the accelerated SA. We focus on problems without strong convexity, for which all well-known algorithms achieve a convergence rate for function values of $O(1/n)$. We consider and analyze an accelerated SA algorithm that achieves a rate of $O(1/n)$ for the classical least-square regression and logistic regression problems, respectively. Compared with the well-known results, we need fewer conditions to obtain the tight convergence rate for the least-square regression and logistic regression problems.
Keywords