Applied Sciences (Jul 2022)

An Enhanced Proximal Policy Optimization-Based Reinforcement Learning Method with Random Forest for Hyperparameter Optimization

  • Zhixin Ma,
  • Shengmin Cui,
  • Inwhee Joe

DOI
https://doi.org/10.3390/app12147006
Journal volume & issue
Vol. 12, no. 14
p. 7006

Abstract


For most machine learning and deep learning models, the choice of hyperparameters has a significant impact on performance. Deep learning and data analysis experts therefore have to spend a great deal of time on hyperparameter tuning when building a model for a given task. Although many algorithms have been proposed for hyperparameter optimization (HPO), these methods require the results of actual training trials at each step to guide the search. To reduce the number of trials, model-based reinforcement learning adopts a multilayer perceptron (MLP) to capture the relationship between hyperparameter settings and model performance. However, an MLP must be carefully designed because it carries a risk of overfitting. We therefore propose a random-forest-enhanced proximal policy optimization (RFEPPO) reinforcement learning algorithm to solve the HPO problem. In addition, reinforcement learning applied to HPO encounters the sparse-reward problem, which eventually leads to slow convergence. To address this problem, we employ an intrinsic reward that uses the prediction error as the reward signal. Experiments carried out on nine tabular datasets and two image classification datasets demonstrate the effectiveness of our model.
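The abstract names two ingredients: a random-forest surrogate that maps hyperparameter settings to model performance, and an intrinsic reward derived from the surrogate's prediction error. The paper's full RFEPPO algorithm is not reproduced here; the sketch below only illustrates these two ingredients under stated assumptions. The hyperparameter features, the observed trial data, and the `intrinsic_reward` helper are all hypothetical examples, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical trials already evaluated. Each row is one hyperparameter
# setting (e.g. learning rate, batch size, number of layers), and
# y_observed holds the validation accuracy each trial achieved.
X_observed = np.array([
    [1e-3, 32, 2],
    [1e-2, 64, 3],
    [1e-4, 128, 4],
    [5e-3, 32, 3],
])
y_observed = np.array([0.82, 0.88, 0.79, 0.90])

# Random-forest surrogate: models the mapping from hyperparameter
# settings to performance, standing in for the MLP surrogate the
# abstract says is prone to overfitting.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_observed, y_observed)

def intrinsic_reward(setting, true_performance):
    """Surrogate prediction error used as an intrinsic reward signal:
    settings the surrogate predicts poorly yield a larger bonus,
    densifying the otherwise sparse reward. (Illustrative only.)"""
    predicted = surrogate.predict(np.asarray(setting).reshape(1, -1))[0]
    return abs(true_performance - predicted)

# Example: score a new setting proposed by the RL policy after its
# actual trial result comes back.
new_setting = [2e-3, 64, 2]
reward = intrinsic_reward(new_setting, true_performance=0.86)
print(f"intrinsic reward (surrogate prediction error): {reward:.4f}")
```

In RFEPPO as described, such a surrogate would let the PPO policy be trained on predicted outcomes rather than a fresh training run per query, which is what reduces the number of actual trials.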

Keywords