IEEE Access (Jan 2021)

On Selection of a Benchmark by Determining the Algorithms’ Qualities

  • Iztok Fister Jr.,
  • Janez Brest,
  • Andres Iglesias,
  • Akemi Galvez,
  • Suash Deb,
  • Iztok Fister

DOI
https://doi.org/10.1109/ACCESS.2021.3058285
Journal volume & issue
Vol. 9, pp. 51166–51178

Abstract

The motivation for this article stems from an issue that developers of new nature-inspired algorithms commonly face today: How should a test benchmark be selected so that it highlights the quality of the developed algorithm most fairly? To this end, the benchmarks of the CEC Competitions on Real-Parameter Single-Objective Optimization, issued several times over the last decade, serve as a testbed for evaluating the collection of nature-inspired algorithms selected in this study. The article addresses two research questions: (1) How does the selected benchmark affect the ranking of a particular algorithm? (2) Is there a single best algorithm that outperforms all the others on all the selected benchmarks? Ten outstanding algorithms (among them winners of particular competitions) from different periods of the last decade were collected and applied to benchmarks issued during the same period. A comparative analysis showed a strong correlation among the algorithms' rankings across the benchmarks used, although some deviations arose in ranking the best algorithms. The possible reasons for these deviations are identified and discussed.
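
To make the comparative setup concrete, the following minimal Python sketch shows the kind of rank-correlation analysis the abstract describes. It is not the authors' code or data: the rank vectors are invented for illustration, and SciPy's kendalltau/spearmanr are assumed as the correlation measures.

```python
# Minimal sketch (hypothetical data): quantify how strongly the algorithm
# rankings induced by two benchmarks agree, using rank correlation.
from scipy.stats import kendalltau, spearmanr

# Invented ranks (1 = best) of ten algorithms on two benchmark suites.
ranks_benchmark_a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
ranks_benchmark_b = [2, 1, 3, 5, 4, 6, 8, 7, 10, 9]  # small deviations near the top

tau, tau_p = kendalltau(ranks_benchmark_a, ranks_benchmark_b)
rho, rho_p = spearmanr(ranks_benchmark_a, ranks_benchmark_b)

print(f"Kendall tau:  {tau:.3f} (p = {tau_p:.4f})")
print(f"Spearman rho: {rho:.3f} (p = {rho_p:.4f})")
```

A tau or rho close to 1 indicates that the two benchmarks rank the algorithms almost identically; deviations concentrated among the top-ranked algorithms, as reported in the article, lower the correlation only slightly.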

Keywords