PLoS ONE (Jan 2021)

Political optimizer with interpolation strategy for global optimization.

  • Aijun Zhu,
  • Zhanqi Gu,
  • Cong Hu,
  • Junhao Niu,
  • Chuanpei Xu,
  • Zhi Li

DOI
https://doi.org/10.1371/journal.pone.0251204
Journal volume & issue
Vol. 16, no. 5
p. e0251204

Abstract


Political optimizer (PO) is a recent state-of-the-art meta-heuristic technique for global optimization problems as well as real-world engineering optimization, which mimics the multi-staged process of politics in human society. However, due to a greedy strategy in the election phase and an inappropriate balance between global exploration and local exploitation in the party-switching phase, it suffers from stagnation in local optima and low convergence accuracy. To overcome these drawbacks, a series of novel PO variants was proposed by integrating PO with Quadratic Interpolation, Advanced Quadratic Interpolation, Cubic Interpolation, Lagrange Interpolation, Newton Interpolation, and Refraction Learning (RL). The main contributions of this work are as follows. (1) An interpolation strategy was adopted to help the current global best solution escape local optima. (2) RL was integrated into PO to improve the diversity of the population. (3) To better balance exploration and exploitation in the party-switching phase, a logistic model was proposed to maintain that balance. To the best of our knowledge, this is the first time PO has been combined with an interpolation strategy and RL. The performance of the best PO variant was evaluated on 19 widely used benchmark functions and 30 test functions from the IEEE CEC 2014 suite. Experimental results revealed the superior performance of the proposed algorithm in terms of exploration capability.
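To make the two key ingredients of the abstract concrete, the sketch below shows (in Python) a standard per-dimension quadratic-interpolation step (the vertex of the parabola through three candidate solutions, commonly used to refine the current best) and a common Snell's-law-based refraction-learning opposite point. These are generic formulations from the interpolation and opposition-based-learning literature; the exact parameterization used in the paper (and its logistic balancing model) may differ, and the function names and the scaling factors `k`, `n` are illustrative assumptions, not the authors' code.

```python
import numpy as np

def quadratic_interpolation(x1, x2, x3, f1, f2, f3, eps=1e-12):
    """Per-dimension vertex of the parabola through (x1, f1), (x2, f2), (x3, f3).

    A standard QI refinement step: the returned point is the minimizer of the
    quadratic fitted to three solutions and their fitness values.
    """
    num = (x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2 + (x1**2 - x2**2) * f3
    den = (x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3
    return 0.5 * num / (den + eps)

def refraction_learning(x, lb, ub, k=1.0, n=1.0):
    """Refraction-learning opposite of x within [lb, ub].

    A Snell's-law-based generalization of opposition-based learning used to
    diversify the population; with k * n = 1 it reduces to the classic
    opposition point lb + ub - x. The factors k and n are assumed scaling
    parameters here.
    """
    mid = (lb + ub) / 2.0
    return mid + (mid - x) / (k * n)

# Minimal usage example on a sphere function (illustrative only).
if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x**2))
    lb, ub = np.full(3, -5.0), np.full(3, 5.0)
    x1, x2, x3 = (np.random.uniform(lb, ub) for _ in range(3))
    refined = quadratic_interpolation(x1, x2, x3, sphere(x1), sphere(x2), sphere(x3))
    opposite = refraction_learning(x1, lb, ub)
    print(refined, opposite)
```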