International Journal of Automotive Engineering (Jan 2024)

Comparison of Reinforcement Learning and Model Predictive Control for Automated Generation of Optimal Control for Dynamic Systems within a Design Space Exploration Framework

  • Patrick Hoffmann,
  • Kirill Gorelik,
  • Valentin Ivanov

DOI
https://doi.org/10.20485/jsaeijae.15.1_19
Journal volume & issue
Vol. 15, no. 1
pp. 19–26

Abstract

This work studies methods for the automated derivation of control strategies for over-actuated systems. For this purpose, Reinforcement Learning (RL) and Model Predictive Control (MPC), both approximating the solution of the Optimal Control Problem (OCP), are compared using the example of an over-actuated vehicle model executing an ISO Double Lane Change (DLC). This maneuver is chosen because its critical vehicle dynamics allow the algorithms to be compared in terms of both control performance and their potential for automation within a design space exploration framework. Both algorithms achieve reasonable control results for the purposes of this study, although they differ in driving stability. While MPC first requires optimizing a reference trajectory, which must then be tracked optimally, RL can combine both steps in one. In addition, the manual effort required to adapt the OCP to new design variants when solving it with RL and MPC is evaluated and assessed with respect to its potential for automation. Based on the results of this study, an Actor-Critic Reinforcement Learning method is recommended for the automated derivation of control strategies in the context of design space exploration.
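To illustrate the Actor-Critic idea the abstract recommends, the following is a minimal one-step actor-critic sketch on a toy lateral-tracking problem. The double-integrator dynamics, linear features, reward weights, and hyperparameters are illustrative assumptions for this sketch only; they are not the paper's over-actuated vehicle model or the ISO DLC maneuver.

import numpy as np

rng = np.random.default_rng(0)

# Toy lateral dynamics (hypothetical stand-in for a vehicle model):
# state x = [lateral offset y, lateral velocity v], action u = lateral
# acceleration, integrated with a fixed time step dt.
dt = 0.05

def step(x, u, y_ref):
    y, v = x
    y_new = y + dt * v
    v_new = v + dt * u
    # Quadratic tracking cost plus a small control-effort penalty
    reward = -(y_new - y_ref) ** 2 - 0.01 * u ** 2
    return np.array([y_new, v_new]), reward

def features(x, y_ref):
    # Simple linear features of the tracking error and velocity
    return np.array([x[0] - y_ref, x[1], 1.0])

# Linear-Gaussian actor and linear critic (assumed minimal setup)
theta = np.zeros(3)            # actor weights: mean action = theta . phi
w = np.zeros(3)                # critic weights: V(x) ~= w . phi
sigma = 0.5                    # fixed exploration noise
gamma = 0.99
alpha_actor, alpha_critic = 1e-3, 1e-2

for episode in range(2000):
    x = np.array([1.0, 0.0])   # start offset from the reference lane
    y_ref = 0.0                # target lane center
    for t in range(100):
        phi = features(x, y_ref)
        mu = theta @ phi
        u = mu + sigma * rng.standard_normal()
        x_next, r = step(x, u, y_ref)
        phi_next = features(x_next, y_ref)
        # TD(0) error serves as the advantage estimate
        delta = r + gamma * (w @ phi_next) - w @ phi
        # Critic: semi-gradient TD update
        w += alpha_critic * delta * phi
        # Actor: policy-gradient step; grad log pi = (u - mu) / sigma^2 * phi
        theta += alpha_actor * delta * (u - mu) / sigma**2 * phi
        x = x_next

The sketch reflects the contrast drawn in the abstract: the policy learns to track the reference directly from interaction, with no separate trajectory-optimization stage, whereas an MPC controller would re-solve a finite-horizon OCP at every step before tracking the resulting trajectory.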