Results in Control and Optimization (Jun 2024)

Controlled gradient descent: A control theoretical perspective for optimization

  • Revati Gunjal,
  • Syed Shadab Nayyer,
  • S.R. Wagh,
  • N.M. Singh

Journal volume & issue
Vol. 15
p. 100417

Abstract


The Gradient Descent (GD) paradigm is a foundational principle of modern optimization algorithms. The GD algorithm and its variants, including accelerated optimization algorithms, geodesic optimization, the natural gradient, and contraction-based optimization, to name a few, are used in machine learning and in the systems and control domain. Here, we propose a new algorithm based on a control-theoretic perspective, labeled the Controlled Gradient Descent (CGD). Specifically, this approach overcomes a key challenge of the abovementioned algorithms, namely their reliance on the choice of a suitable geometric structure, which is particularly difficult in machine learning. The proposed CGD approach views optimization as a Manifold Stabilization Problem (MSP) through the notion of an invariant manifold and its attractivity. As an additional outcome, the CGD approach leads to an exponential contraction of trajectories under the influence of a pseudo-Riemannian metric generated through the control procedure. The efficacy of the CGD is demonstrated on various test objective functions, including the benchmark Rosenbrock function, objective functions lacking flatness, and semi-contracting objective functions often encountered in machine learning applications.
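
For reference, the following is a minimal sketch of the baseline GD paradigm the abstract builds on, applied to the Rosenbrock benchmark mentioned above. It illustrates only plain fixed-step gradient descent, not the authors' CGD method; the step size, iteration count, and starting point are illustrative assumptions.

```python
import numpy as np

def rosenbrock(x, a=1.0, b=100.0):
    # f(x, y) = (a - x)^2 + b (y - x^2)^2, with minimum at (a, a^2)
    return (a - x[0]) ** 2 + b * (x[1] - x[0] ** 2) ** 2

def rosenbrock_grad(x, a=1.0, b=100.0):
    # Analytic gradient of the Rosenbrock function
    dx = -2.0 * (a - x[0]) - 4.0 * b * x[0] * (x[1] - x[0] ** 2)
    dy = 2.0 * b * (x[1] - x[0] ** 2)
    return np.array([dx, dy])

def gradient_descent(x0, step=1e-4, iters=50000):
    # Fixed-step GD update: x_{k+1} = x_k - step * grad f(x_k)
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x -= step * rosenbrock_grad(x)
    return x

if __name__ == "__main__":
    x_star = gradient_descent([-1.2, 1.0])  # assumed starting point
    print("approximate minimizer:", x_star, "f =", rosenbrock(x_star))
```

Plain GD converges slowly in the Rosenbrock valley; approaches such as the CGD described in the abstract aim to improve this behavior by shaping the descent dynamics through control.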

Keywords