Results in Control and Optimization (Mar 2024)
Efficient hybrid conjugate gradient techniques for vector optimization
Abstract
Scalarization approaches transform vector optimization problems (VOPs) into single-objective problems but carry trade-offs: information loss, subjective weight assignments, and limited representation of the Pareto front. To address these limitations, alternative strategies such as conjugate gradient (CG) techniques are valuable for their simplicity and low memory usage. This paper introduces three CG techniques for VOPs, including two that satisfy the sufficient descent condition (SDC) independently of the line search. These two techniques are each combined with the third, a variant of the Polak–Ribière–Polyak (PRP) technique, yielding two hybrid CG techniques. Global convergence of these hybrids is established under standard assumptions and the Wolfe line search, without convexity assumptions. Numerical experiments and comparisons with the nonnegative PRP and Liu–Storey (LS) CG techniques demonstrate the implementation and effectiveness of our hybrid CG techniques. The results show the promise of our hybrid techniques.
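To fix ideas, the nonnegative PRP rule mentioned above can be sketched for a single smooth objective. This is an illustrative sketch only, not the paper's vector-optimization method: the function `prp_plus_cg`, the Armijo backtracking step (a stand-in for the Wolfe line search), and all parameter values are assumptions chosen for the example.

```python
import numpy as np

def prp_plus_cg(f, grad, x0, tol=1e-8, max_iter=500):
    """Nonnegative Polak-Ribiere-Polyak (PRP+) conjugate gradient
    for one smooth objective; illustrative sketch only."""
    x = x0.astype(float)
    g = grad(x)
    d = -g                      # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # backtracking line search enforcing the Armijo condition
        # (the paper's analysis uses the stronger Wolfe conditions)
        t, c = 1.0, 1e-4
        while f(x + t * d) > f(x) + c * t * g.dot(d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        # PRP parameter truncated at zero: the "nonnegative PRP" rule,
        # which restarts with steepest descent when beta would be negative
        beta = max(g_new.dot(g_new - g) / g.dot(g), 0.0)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# usage: minimize a convex quadratic whose minimizer is (1, -2)
x_star = prp_plus_cg(lambda x: (x[0] - 1)**2 + 2 * (x[1] + 2)**2,
                     lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] + 2)]),
                     np.zeros(2))
```

The truncation `max(..., 0.0)` is what distinguishes the nonnegative PRP variant from plain PRP; it guards against directions that would undo descent progress.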