Results in Control and Optimization (Sep 2024)
Improving derivative-free optimization algorithms through an adaptive sampling procedure
Abstract
Black-box optimization plays a pivotal role in addressing complex real-world problems where the underlying mathematical model is unknown or expensive to evaluate. In this context, this work presents a method to enhance the performance of derivative-free optimization algorithms by integrating an adaptive sampling procedure. The proposed methodology aims to overcome the limitations of traditional methods by intelligently guiding the search towards promising regions of the search space. To achieve this, we use machine learning models as surrogates for first-principles models, and we employ an error-maximization approach to steer exploration towards regions where the surrogate model deviates significantly from the true model. We further incorporate a heuristic adaptive sampling procedure that makes repeated calls to SNOBFIT, a widely used derivative-free optimization algorithm, allowing new and improved surrogate models to be built at each iteration. To evaluate the efficiency of the proposed method, we conduct a comparative analysis across a benchmark set of 776 continuous problems. Our approach successfully solved 93% of the problems. Notably, on larger problems it outperformed the standard SNOBFIT algorithm, achieving a 19% increase in the problem-solving rate, and when we introduced an additional termination criterion to enhance computational efficiency, the proposed method reduced computation time by 31% compared to SNOBFIT.
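As a rough illustration of the surrogate-plus-error-maximization loop sketched in the abstract, the following minimal Python example alternates between minimizing a surrogate model and sampling where the surrogate is least certain. It is an assumption-laden sketch, not the paper's implementation: a Gaussian-process surrogate stands in for the machine learning models, scipy's differential_evolution stands in for SNOBFIT (which is not assumed to be available), and the predictive standard deviation is used as a proxy for the surrogate's deviation from the true model. The function and parameter names (adaptive_sampling, n_init, n_iter) are hypothetical.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.gaussian_process import GaussianProcessRegressor


def adaptive_sampling(f, bounds, n_init=10, n_iter=20, rng=None):
    """Hypothetical adaptive sampling loop (illustrative only).

    f      : expensive black-box objective, f(x) -> float
    bounds : list of (low, high) tuples, one per dimension
    """
    rng = np.random.default_rng(rng)
    lo, hi = np.array(bounds).T
    # Initial space-filling sample of the search domain.
    X = rng.uniform(lo, hi, size=(n_init, len(bounds)))
    y = np.array([f(x) for x in X])

    for _ in range(n_iter):
        # Refit the surrogate on all evaluations gathered so far.
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)

        # Exploitation: optimize the cheap surrogate (stand-in for an
        # inner derivative-free solver such as SNOBFIT).
        res_min = differential_evolution(
            lambda x: gp.predict(x.reshape(1, -1))[0], bounds, seed=1)

        # Exploration via error maximization: pick the point where the
        # surrogate's predictive uncertainty is largest.
        res_err = differential_evolution(
            lambda x: -gp.predict(x.reshape(1, -1), return_std=True)[1][0],
            bounds, seed=1)

        # Evaluate the true model at both candidates and grow the data set.
        for x_new in (res_min.x, res_err.x):
            X = np.vstack([X, x_new])
            y = np.append(y, f(x_new))

    best = np.argmin(y)
    return X[best], y[best]


# Example usage on a simple sphere function in three dimensions.
x_best, f_best = adaptive_sampling(lambda x: float(np.sum(x**2)),
                                   [(-5.0, 5.0)] * 3, rng=0)
print(x_best, f_best)
```

The key design point the sketch tries to convey is the alternation between the two inner optimizations: one drives the search towards the surrogate's current minimum, while the other deliberately samples high-uncertainty regions so that each refitted surrogate improves where it was previously unreliable.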