Jisuanji kexue yu tansuo (Sep 2022)

Particle Swarm Optimization Combined with Q-learning of Experience Sharing Strategy

  • LUO Yixuan, LIU Jianhua, HU Renyuan, ZHANG Dongyang, BU Guannan

DOI
https://doi.org/10.3778/j.issn.1673-9418.2102070
Journal volume & issue
Vol. 16, no. 9
pp. 2151 – 2162

Abstract


Particle swarm optimization (PSO) suffers from shortcomings such as a tendency to fall into local optima, insufficient diversity, and low precision. Recently, improving PSO by combining it with reinforcement learning methods such as Q-learning has emerged as a new approach. However, existing methods of this kind have been shown to suffer from insufficiently objective parameter selection, and their limited strategies cannot cope with varied situations. This paper proposes Q-learning PSO with experience sharing (QLPSOES). The algorithm combines PSO with reinforcement learning by constructing a Q-table for each particle, which dynamically selects that particle's parameter settings. At the same time, an experience sharing strategy is designed, in which particles share the "behavior experience" of the optimal particle through its Q-table. This mechanism accelerates the convergence of the Q-tables, enhances learning between particles, and balances the global and local search abilities of the algorithm. In addition, this paper uses orthogonal analysis experiments to determine the state, action parameters, and reward function of the reinforcement learning method within the PSO algorithm. Experiments on the CEC2013 benchmark functions show that QLPSOES achieves significantly better convergence speed and accuracy than the compared algorithms, verifying that the algorithm has better performance.
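The idea described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact design: the two-valued state (did the particle improve its personal best or not), the four candidate parameter triples (w, c1, c2), the ±1 reward, and the sharing interval are all simplifying assumptions made here for the sketch; the paper selects these via orthogonal analysis experiments.

```python
import random

# Assumed action set: candidate PSO parameter triples (w, c1, c2).
ACTIONS = [(0.9, 2.0, 2.0), (0.7, 1.5, 1.5), (0.4, 1.0, 2.5), (0.4, 2.5, 1.0)]
N_STATES = 2  # assumed state: 0 = last step improved personal best, 1 = it did not

def sphere(x):
    """Simple test objective (minimum 0 at the origin)."""
    return sum(v * v for v in x)

def qlpso_es(f=sphere, dim=5, n_particles=15, iters=200, alpha=0.1,
             gamma=0.9, eps=0.1, share_every=20, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    # One Q-table per particle: Q[i][state][action].
    Q = [[[0.0] * len(ACTIONS) for _ in range(N_STATES)]
         for _ in range(n_particles)]
    state = [0] * n_particles
    for t in range(iters):
        for i in range(n_particles):
            # Epsilon-greedy choice of parameters from this particle's Q-table.
            if rng.random() < eps:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda k: Q[i][state[i]][k])
            w, c1, c2 = ACTIONS[a]
            # Standard PSO velocity/position update with the selected parameters.
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = max(-5.0, min(5.0, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            improved = val < pbest_val[i]
            reward = 1.0 if improved else -1.0  # assumed improvement-based reward
            nxt = 0 if improved else 1
            # Standard Q-learning update of the particle's own Q-table.
            Q[i][state[i]][a] += alpha * (reward + gamma * max(Q[i][nxt])
                                          - Q[i][state[i]][a])
            state[i] = nxt
            if improved:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
        # Experience sharing: periodically copy the best particle's Q-table
        # ("behavior experience") to all particles.
        if (t + 1) % share_every == 0:
            b = min(range(n_particles), key=lambda i: pbest_val[i])
            for i in range(n_particles):
                Q[i] = [row[:] for row in Q[b]]
    return gbest, gbest_val
```

Sharing the Q-table rather than a position lets lagging particles inherit the best particle's learned parameter-selection policy without collapsing swarm diversity in the search space itself.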

Keywords