IEEE Access (Jan 2022)

Safe Reinforcement Learning Using Wasserstein Distributionally Robust MPC and Chance Constraint

  • Arash Bahari Kordabad,
  • Rafael Wisniewski,
  • Sebastien Gros

DOI
https://doi.org/10.1109/ACCESS.2022.3228922
Journal volume & issue
Vol. 10
pp. 130058 – 130067

Abstract

In this paper, we address the chance-constrained safe Reinforcement Learning (RL) problem using function approximators based on Stochastic Model Predictive Control (SMPC) and Distributionally Robust Model Predictive Control (DRMPC). We use the Conditional Value at Risk (CVaR) to measure the probability of constraint violation and, hence, safety. To provide a policy that is safe by construction, we first propose using a parameterized nonlinear DRMPC scheme at each time step. The DRMPC optimizes a finite-horizon cost function subject to the worst-case constraint violation over an ambiguity set. As the ambiguity set, we use a statistical ball around the empirical distribution whose radius is measured by the Wasserstein metric. Unlike sample-average-approximation SMPC, DRMPC provides a probabilistic guarantee on the out-of-sample risk and requires fewer disturbance samples. Q-learning is then used to optimize the DRMPC parameters so as to achieve the best closed-loop performance. Path planning with obstacle avoidance for a Wheeled Mobile Robot (WMR) is considered to illustrate the efficiency of the proposed method.
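As a concrete illustration of the CVaR-based safety measure mentioned in the abstract, the short Python sketch below estimates the empirical CVaR of a constraint function from disturbance samples using the Rockafellar-Uryasev formulation. The constraint function g and the Gaussian disturbance model are hypothetical placeholders and the sketch is not the authors' implementation; it only shows the kind of risk quantity the chance constraint is built on.

    import numpy as np

    def empirical_cvar(samples, alpha):
        # Rockafellar-Uryasev formulation: CVaR_alpha(X) = min_t { t + E[(X - t)^+] / alpha }.
        # For a finite sample the minimizer t is the (1 - alpha)-quantile (the Value at Risk).
        t = np.quantile(samples, 1.0 - alpha)
        return t + np.mean(np.maximum(samples - t, 0.0)) / alpha

    # Hypothetical constraint g(x, w) <= 0 evaluated on sampled disturbances w.
    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.1, size=1000)   # assumed Gaussian disturbance (placeholder model)
    g = 0.5 + w - 1.0                     # placeholder constraint values g(x, w)
    alpha = 0.05
    print("empirical CVaR_0.05 of g:", empirical_cvar(g, alpha))
    # CVaR_alpha(g) <= 0 implies the chance constraint P(g <= 0) >= 1 - alpha.

Requiring CVaR_alpha(g) <= 0 is a convex, conservative surrogate for the chance constraint P(g <= 0) >= 1 - alpha, which is why CVaR is a common choice in distributionally robust formulations such as the one described above.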

Keywords