AIMS Mathematics (Sep 2024)
Distributed Newton method for time-varying convex optimization with backward Euler prediction
Abstract
We investigate the problem of unconstrained distributed optimization with a time-varying objective function, employing a prediction-correction approach. Our method introduces a backward Euler prediction step that uses the differential information from consecutive time instants to forecast the future direction of the optimal trajectory. The predicted value is then refined through an iterative correction process. Our analysis and experimental results demonstrate that this approach effectively solves the optimization problem without requiring computation of the inverse of the Hessian matrix.
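To make the prediction-correction idea concrete, the following is a minimal, illustrative sketch only, not the paper's algorithm: it tracks the minimizer of a toy time-varying quadratic objective, predicting the next point by extrapolating two consecutive iterates (a finite-difference surrogate for the trajectory's time derivative) and correcting with a few gradient steps, so no Hessian is formed or inverted. The functions b, f_grad and all parameters below are assumptions made for this example.

```python
import numpy as np

def b(t):
    # time-varying target that the minimizer x*(t) must track (assumed toy problem)
    return np.array([np.cos(t), np.sin(t)])

def f_grad(x, t):
    # gradient of the toy objective f(x, t) = 0.5 * ||x - b(t)||^2
    return x - b(t)

h = 0.1            # sampling period between consecutive moments
num_steps = 50     # number of sampling instants
corr_iters = 3     # correction iterations per instant
alpha = 0.5        # correction step size

x_prev = b(0.0).copy()   # iterate at t_{k-1}
x_curr = b(h).copy()     # iterate at t_k

for k in range(2, num_steps):
    t_next = k * h
    # Prediction: extrapolate the trajectory from two consecutive iterates.
    x_pred = x_curr + (x_curr - x_prev)
    # Correction: refine the prediction at the new time with gradient steps,
    # avoiding any explicit Hessian inversion.
    x_corr = x_pred
    for _ in range(corr_iters):
        x_corr = x_corr - alpha * f_grad(x_corr, t_next)
    x_prev, x_curr = x_curr, x_corr
    tracking_err = np.linalg.norm(x_curr - b(t_next))
```

The gradient-based correction is used here purely for simplicity; the paper's distributed Newton-type correction replaces it while still avoiding explicit computation of the Hessian inverse.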
Keywords