IEEE Access (Jan 2024)

Benchmarking Real-World Many-Objective Problems: A Problem Suite With Baseline Results

  • Vikas Palakonda,
  • Jae-Mo Kang,
  • Heechul Jung

DOI: https://doi.org/10.1109/ACCESS.2024.3383916
Journal volume & issue: Vol. 12, pp. 49275–49290

Abstract

In recent decades, multi-objective evolutionary algorithms (MOEAs) have been evaluated largely on artificial test problems with unrealistic characteristics, leading to uncertain conclusions about their efficacy in real-world applications. To address this issue, a few benchmark test suites comprising real-world problems have been proposed for MOEAs; these encompass numerous multi-objective problems but only a select few many-objective problems. Given the distinct challenges posed by many-objective optimization problems (MaOPs) and their inherent difficulty, developing a test suite of real-world problems with many conflicting objectives is crucial. Hence, in this paper, we propose a comprehensive test suite for benchmarking many-objective real-world complex problems. The test suite consists of 11 problems collected from different engineering disciplines. Furthermore, we comprehensively analyze the problems in the newly proposed test suite using eight state-of-the-art algorithms, rooted in various fundamental principles, that are specifically designed to address MaOPs. The experimental findings highlight the strong performance of indicator-based, weight-vector-based decomposition, Pareto-dominance-based, and hybrid MOEAs on the proposed test suite. In contrast, reference-vector-based decomposition approaches, Pareto-front-shape-estimation-based methods, and multi-evolution approaches exhibit relatively weaker performance.

Keywords