Mathematics (Jul 2024)

A Momentum-Based Adaptive Primal–Dual Stochastic Gradient Method for Non-Convex Programs with Expectation Constraints

  • Rulei Qi,
  • Dan Xue,
  • Yujia Zhai

DOI
https://doi.org/10.3390/math12152393
Journal volume & issue
Vol. 12, no. 15
p. 2393

Abstract

In this paper, we propose a stochastic primal–dual adaptive method based on an inexact augmented Lagrangian function to solve non-convex programs with expectation constraints, referred to as SPDAM. Unlike existing methods, SPDAM incorporates an adaptive step size and momentum-based search directions, which improve the convergence rate. At each iteration, an inexact augmented Lagrangian subproblem is solved to update the primal variables. A post-processing step adjusts the primal variables to meet the accuracy requirement, and the adjusted primal variables are then used to update the dual variable. Under appropriate assumptions, we prove that the method converges to an ε-KKT point of the primal problem and establish a complexity bound of O(ε^{−11/2}) for SPDAM, which improves on the well-known O(ε^{−6}) result. Numerical experiments show that the method outperforms several existing methods, requiring fewer iterations and less running time.
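To make the ingredients named in the abstract concrete, the following is a minimal, hypothetical sketch of a generic momentum-based adaptive stochastic primal–dual loop on an augmented Lagrangian. It is not the paper's SPDAM updates: the sampled objective and constraint (F_sample, C_sample), the AdaGrad-style step-size rule, and the simple dual ascent step are all illustrative assumptions, and the inexact subproblem solve and post-processing step described in the abstract are omitted.

```python
# Hypothetical sketch (not the paper's SPDAM): momentum-based stochastic
# gradient steps on an augmented Lagrangian for
#     min_x E[F(x, xi)]   s.t.   E[C(x, xi)] <= 0,
# with an adaptive step size and a dual ascent update.
import numpy as np

rng = np.random.default_rng(0)

def F_sample(x, xi):            # stochastic objective sample (assumed form)
    return 0.5 * np.sum((x - xi) ** 2)

def gradF_sample(x, xi):        # its gradient in x
    return x - xi

def C_sample(x, xi):            # stochastic constraint sample (assumed form)
    return np.sum(x) - 1.0 + 0.1 * xi[0]

def gradC_sample(x, xi):        # its gradient in x
    return np.ones_like(x)

def primal_dual_momentum_loop(x, lam=0.0, rho=10.0, beta=0.9, iters=500):
    """Stochastic gradient steps on the augmented Lagrangian
    L_rho(x, lam) = f(x) + lam * c(x) + (rho / 2) * max(c(x), 0)^2,
    using a momentum-smoothed search direction, an AdaGrad-style
    adaptive step size, and a projected dual ascent step."""
    d = np.zeros_like(x)        # momentum-based search direction
    g2 = 1e-8                   # squared-gradient accumulator (adaptivity)
    for _ in range(iters):
        xi = rng.standard_normal(x.shape)
        c = C_sample(x, xi)
        # stochastic gradient of the augmented Lagrangian in x
        g = gradF_sample(x, xi) + (lam + rho * max(c, 0.0)) * gradC_sample(x, xi)
        d = beta * d + (1.0 - beta) * g          # momentum update
        g2 += np.sum(g * g)
        eta = 0.5 / np.sqrt(g2)                  # adaptive step size
        x = x - eta * d                          # primal update
        lam = max(0.0, lam + eta * c)            # dual (multiplier) update
    return x, lam

x_out, lam_out = primal_dual_momentum_loop(np.zeros(5))
print(x_out, lam_out)
```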

Keywords