IEEE Access (Jan 2025)

Machine Learning and Deep Learning Optimization Algorithms for Unconstrained Convex Optimization Problem

  • Kainat Naeem,
  • Amal Bukhari,
  • Ali Daud,
  • Tariq Alsahfi,
  • Bader Alshemaimri,
  • Mousa Alhajlah

DOI
https://doi.org/10.1109/ACCESS.2024.3522361
Journal volume & issue
Vol. 13
pp. 1817 – 1833

Abstract


This paper conducts a thorough comparative analysis of optimization algorithms for an unconstrained convex optimization problem. It contrasts traditional methods such as Gradient Descent (GD) and Nesterov Accelerated Gradient (NAG) with modern techniques such as Adaptive Moment Estimation (Adam), Long Short-Term Memory (LSTM), and Multilayer Perceptron (MLP). Through empirical experiments, convergence speed, solution accuracy, and robustness are evaluated, providing insights to aid algorithm selection. The convergence dynamics of convex optimization are explored by analyzing classical algorithms and contemporary neural network (NN) methodologies. The study concludes with a comparative assessment of these algorithms' performance metrics and their respective strengths and weaknesses.
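To make the kind of comparison described above concrete, here is a minimal illustrative sketch (not taken from the paper) that runs GD, NAG, and Adam on a simple unconstrained convex quadratic and reports each method's distance to the closed-form minimizer. The particular quadratic, learning rates, momentum value, and step counts are assumptions chosen for illustration, not the paper's experimental setup.

```python
# Illustrative sketch (assumed setup, not the paper's experiments):
# compare GD, NAG, and Adam on f(x) = 0.5 * x^T A x - b^T x, a convex quadratic.
import numpy as np

A = np.array([[3.0, 0.2], [0.2, 1.0]])   # symmetric positive definite -> convex objective
b = np.array([1.0, -2.0])
x_star = np.linalg.solve(A, b)           # closed-form minimizer for reference

def grad(x):
    return A @ x - b

def gd(x0, lr=0.1, steps=200):
    x = x0.copy()
    for _ in range(steps):
        x -= lr * grad(x)                # plain gradient step
    return x

def nag(x0, lr=0.1, momentum=0.9, steps=200):
    x, v = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        v = momentum * v - lr * grad(x + momentum * v)  # gradient at look-ahead point
        x += v
    return x

def adam(x0, lr=0.1, b1=0.9, b2=0.999, eps=1e-8, steps=200):
    x, m, v = x0.copy(), np.zeros_like(x0), np.zeros_like(x0)
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g        # first-moment estimate
        v = b2 * v + (1 - b2) * g**2     # second-moment estimate
        m_hat, v_hat = m / (1 - b1**t), v / (1 - b2**t)  # bias correction
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

x0 = np.array([5.0, 5.0])
for name, opt in [("GD", gd), ("NAG", nag), ("Adam", adam)]:
    x = opt(x0)
    print(f"{name}: distance to optimum = {np.linalg.norm(x - x_star):.2e}")
```

On a problem like this, all three optimizers converge; a comparison in the spirit of the paper would additionally track iteration counts, solution accuracy, and sensitivity to hyperparameters.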

Keywords