Mathematical Biosciences and Engineering (Jun 2024)

Hyperparameter optimization: Classics, acceleration, online, multi-objective, and tools

  • Jia Mian Tan,
  • Haoran Liao,
  • Wei Liu,
  • Changjun Fan,
  • Jincai Huang,
  • Zhong Liu,
  • Junchi Yan

DOI: https://doi.org/10.3934/mbe.2024275
Journal volume & issue: Vol. 21, no. 6, pp. 6289–6335

Abstract

Hyperparameter optimization (HPO) has evolved into a well-established research topic over the past decades. With the success and wide application of deep learning, HPO has garnered increased attention, particularly within the realm of machine learning model training and inference. Its primary objective is to mitigate the challenges of manual hyperparameter tuning, which is often ad hoc and reliant on human expertise, and consequently hinders reproducibility while inflating deployment costs. Recognizing the growing significance of HPO, this paper surveyed classical HPO methods, approaches for accelerating the optimization process, HPO in online settings (dynamic algorithm configuration, DAC), and HPO with more than one objective to optimize (multi-objective HPO). Acceleration strategies were categorized into multi-fidelity, bandit-based, and early-stopping methods; DAC algorithms encompassed gradient-based, population-based, and reinforcement learning-based methods; and multi-objective HPO can be approached via scalarization, metaheuristics, and model-based algorithms tailored to multi-objective settings. A tabulated overview of popular frameworks and tools for HPO was provided, catering to the interests of practitioners.
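To make the bandit-based acceleration family mentioned in the abstract concrete, below is a minimal sketch of successive halving, one representative algorithm of that class: many configurations are evaluated cheaply, only the best fraction survives, and survivors are re-evaluated with a larger budget. The function names (successive_halving, toy_evaluate) and the toy objective are illustrative assumptions, not code from the paper.

```python
import random

def successive_halving(configs, evaluate, min_budget=1, eta=3):
    """Bandit-style acceleration: score all surviving configurations at the
    current budget, keep the top 1/eta fraction, then grow the budget by eta."""
    budget = min_budget
    while len(configs) > 1:
        # Score every surviving configuration at the current budget.
        scores = [(evaluate(c, budget), c) for c in configs]
        scores.sort(key=lambda s: s[0])  # lower loss is better
        # Keep the best 1/eta fraction and increase the budget by a factor of eta.
        configs = [c for _, c in scores[: max(1, len(scores) // eta)]]
        budget *= eta
    return configs[0]

# Toy objective (an assumption for illustration): the "hyperparameter" is a
# learning rate, and a larger budget (epochs) yields a less noisy loss estimate.
def toy_evaluate(lr, epochs):
    true_loss = (lr - 0.1) ** 2
    noise = random.gauss(0, 0.01 / epochs)
    return true_loss + noise

candidates = [random.uniform(0.001, 1.0) for _ in range(27)]
best_lr = successive_halving(candidates, toy_evaluate)
print(f"best learning rate found: {best_lr:.4f}")
```

With 27 initial candidates and eta = 3, the survivor counts follow 27, 9, 3, 1 while the budget grows 1, 3, 9, so most of the evaluation cost is spent on the few configurations that survived the cheap early rounds.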

Keywords