IEEE Access (Jan 2021)

Multi-Armed Bandit Regularized Expected Improvement for Efficient Global Optimization of Expensive Computer Experiments With Low Noise

  • Rajitha Meka,
  • Adel Alaeddini,
  • Chinonso Ovuegbe,
  • Pranav A. Bhounsule,
  • Peyman Najafirad,
  • Kai Yang

DOI: https://doi.org/10.1109/ACCESS.2021.3095755
Journal volume & issue: Vol. 9, pp. 100125–100140

Abstract

Computer experiments are widely used to mimic expensive physical processes as black-box functions. A typical challenge in expensive computer experiments is to find the set of inputs that produces the desired response. This study proposes a multi-armed bandit regularized expected improvement (BREI) method that adaptively adjusts the balance between exploration and exploitation for efficient global optimization of long-running computer experiments with low noise. BREI adds a stochastic regularization term to the expected-improvement objective function to integrate additional exploration and exploitation information into the optimization process. The study also develops a multi-armed bandit strategy, based on Thompson sampling, that adaptively tunes the BREI regularization parameter using both preexisting and newly tested points. The performance of the proposed method is validated against several existing methods under different noise levels through extensive simulation studies and a case study on optimizing the collision-avoidance algorithm in mobile robot motion planning.
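
For intuition, here is a minimal, self-contained sketch (Python, using scikit-learn's Gaussian process and SciPy) of the mechanics the abstract describes: a standard expected-improvement acquisition augmented with a regularization term weighted by a tuning parameter, and Beta-Bernoulli Thompson sampling to choose that weight from a small set of candidate arms. The variance-based regularizer, the candidate weights, and the improvement-based reward rule are illustrative assumptions, not the paper's exact BREI formulation (whose regularization term is stochastic).

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def expected_improvement(mu, sigma, f_best):
    """Standard expected improvement for minimization."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def regularized_ei(mu, sigma, f_best, lam):
    """EI plus an additive regularization term weighted by lam.
    Here a deterministic predictive-variance term stands in for the
    paper's stochastic regularizer (an assumption for illustration)."""
    return expected_improvement(mu, sigma, f_best) + lam * sigma

def objective(x):
    """Toy expensive black box with low observation noise."""
    return np.sin(3 * x) + 0.1 * x**2 + rng.normal(scale=0.01)

# Thompson-sampling arms: candidate regularization weights (assumed grid).
lambdas = np.array([0.0, 0.1, 0.5, 1.0])
wins = np.ones_like(lambdas)    # Beta(1, 1) prior per arm
losses = np.ones_like(lambdas)

# Small initial design for the GP surrogate.
X = rng.uniform(-2, 2, size=(5, 1))
y = np.array([objective(x[0]) for x in X])
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(20):
    gp.fit(X, y)
    # Thompson sampling: draw one Beta sample per arm, play the best draw.
    arm = np.argmax(rng.beta(wins, losses))
    lam = lambdas[arm]
    # Maximize the regularized acquisition over a dense candidate grid.
    cand = np.linspace(-2, 2, 501).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(regularized_ei(mu, sigma, y.min(), lam))]
    y_next = objective(x_next[0])
    # Assumed reward rule: credit the arm if the incumbent improved.
    if y_next < y.min():
        wins[arm] += 1
    else:
        losses[arm] += 1
    X = np.vstack([X, x_next.reshape(1, -1)])
    y = np.append(y, y_next)

print("best input:", X[np.argmin(y)], "best value:", y.min())
```

In this toy loop, arms whose weight leads to an improved incumbent accumulate Beta "wins", so the sampler gradually favors whichever exploration-exploitation balance pays off on the problem at hand, mirroring the adaptive tuning of the BREI parameter described in the abstract.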

Keywords