Applied Computational Intelligence and Soft Computing (Jan 2024)

Optimizing Artificial Neural Network Learning Using Improved Reinforcement Learning in Artificial Bee Colony Algorithm

  • Taninnuch Lamjiak,
  • Booncharoen Sirinaovakul,
  • Siriwan Kornthongnimit,
  • Jumpol Polvichai,
  • Aysha Sohail

DOI: https://doi.org/10.1155/2024/6357270
Journal volume & issue: Vol. 2024

Abstract

Artificial neural networks (ANNs) are widely used machine learning techniques with applications in many fields. Heuristic search optimization methods are typically used to minimize the loss function in ANNs. However, these methods can cause the network to become stuck in local optima, limiting performance. To overcome this challenge, this study introduces an improved optimization approach, improved reinforcement learning in the artificial bee colony (improved R-ABC) algorithm, to enhance the optimization process for ANNs. The proposed method aims to overcome the limitations of heuristic search and improve the efficiency of weight adjustment in ANNs. The new approach enhances the discovery phase of the traditional R-ABC by incorporating the parameters of neighboring food sources, augmenting its ability to search for the optimal solution. The performance of the improved R-ABC was compared with that of ANNs trained by backpropagation with stochastic gradient descent (SGD) and Adam optimizers, as well as with other swarm intelligence (SI) methods such as particle swarm optimization (PSO) and the traditional R-ABC. The results showed that both PSO and R-ABC continuously improved their solutions across all benchmark datasets. On the iris dataset, all SI approaches consistently achieved F1-scores exceeding 0.94, outperforming SGD and Adam. On the other datasets, the SI approaches generally outperformed the other optimization methods. The results indicate that when the improved R-ABC is applied to ANNs, it outperforms heuristic search optimization, especially as the network size grows. Although SGD and Adam achieved faster execution times with TensorFlow, the study suggests that using PSO and the improved R-ABC can improve model accuracy and efficiency. Advanced SI methods enhance the optimization process and increase the ability of ANNs to reach optimal solutions. Enhanced R-ABC and PSO algorithms can significantly improve ANN training performance and efficiency, especially on complex and high-dimensional datasets.
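
As a rough illustration of the food-source mechanism referenced above, the following Python sketch applies the classic employed-bee update of the artificial bee colony algorithm, v_ij = x_ij + phi * (x_ij - x_kj) with k a neighboring food source, to a flattened ANN weight vector. This is only a minimal sketch under assumed settings (a toy 4-8-3 network on random data, illustrative colony size and abandonment limit); the reinforcement-learning component, the onlooker-bee selection, and the modified discovery phase of the improved R-ABC are not specified in the abstract and are therefore not implemented here.

# Minimal sketch (not the authors' implementation): classic ABC neighbor
# update applied to a flattened ANN weight vector. Network size, colony
# parameters, and the toy data below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 features, 3 classes (iris-like shape), random for illustration.
X = rng.normal(size=(120, 4))
y = rng.integers(0, 3, size=120)

N_HIDDEN = 8
DIM = 4 * N_HIDDEN + N_HIDDEN + N_HIDDEN * 3 + 3   # total weights and biases
COLONY = 20                                        # number of food sources
LIMIT = 30                                         # abandonment limit (scout trigger)

def unpack(w):
    """Split a flat weight vector into the layers of a 4-8-3 network."""
    i = 0
    W1 = w[i:i + 4 * N_HIDDEN].reshape(4, N_HIDDEN); i += 4 * N_HIDDEN
    b1 = w[i:i + N_HIDDEN]; i += N_HIDDEN
    W2 = w[i:i + N_HIDDEN * 3].reshape(N_HIDDEN, 3); i += N_HIDDEN * 3
    b2 = w[i:i + 3]
    return W1, b1, W2, b2

def loss(w):
    """Cross-entropy loss of the network defined by flat weight vector w."""
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    logits = h @ W2 + b2
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

# Initialise food sources (candidate weight vectors) and their trial counters.
sources = rng.uniform(-1.0, 1.0, size=(COLONY, DIM))
fitness = np.array([loss(s) for s in sources])
trials = np.zeros(COLONY, dtype=int)

for it in range(200):
    for i in range(COLONY):
        # Employed-bee move: perturb one dimension toward/away from a
        # randomly chosen neighboring food source k != i.
        k = rng.choice([j for j in range(COLONY) if j != i])
        j = rng.integers(DIM)
        candidate = sources[i].copy()
        candidate[j] += rng.uniform(-1, 1) * (sources[i, j] - sources[k, j])
        f = loss(candidate)
        if f < fitness[i]:          # greedy selection
            sources[i], fitness[i], trials[i] = candidate, f, 0
        else:
            trials[i] += 1
    # Scout phase: abandon stagnant food sources and re-initialise them.
    for i in np.where(trials > LIMIT)[0]:
        sources[i] = rng.uniform(-1.0, 1.0, size=DIM)
        fitness[i] = loss(sources[i])
        trials[i] = 0

print("best training loss:", fitness.min())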