Advanced Intelligent Systems (Jan 2022)

Learning Assembly Tasks in a Few Minutes by Combining Impedance Control and Residual Recurrent Reinforcement Learning

  • Padmaja Kulkarni,
  • Jens Kober,
  • Robert Babuška,
  • Cosimo Della Santina

DOI
https://doi.org/10.1002/aisy.202100095
Journal volume & issue
Vol. 4, no. 1
pp. n/a – n/a

Abstract


Adapting to uncertainties is essential yet challenging for robots conducting assembly tasks in real-world scenarios. Reinforcement learning (RL) methods provide a promising solution for these cases. However, training robots with RL can be a data-intensive, time-consuming, and potentially unsafe process. In contrast, classical control strategies can achieve near-optimal performance without training and be certifiably safe, but only under the assumption that the environment is known up to small uncertainties. Herein, an architecture is proposed that aims at the best of both worlds by combining RL and classical strategies so that each deals with the portion of the assembly problem it is best suited to. A time-varying weighted sum combines a recurrent RL policy with a nominal strategy, and the output serves as the reference for a task-space impedance controller. The proposed approach can learn to insert an object in a frame within a few minutes of real-world training. A success rate of 94% is observed in the presence of considerable uncertainties. Furthermore, the approach is robust to changes in the experimental setup and task, even when no retraining is performed. For example, the same policy achieves a success rate of 85% when the object properties change.
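The core mechanism described in the abstract, a time-varying blend of a nominal reference with an RL residual, fed to a task-space impedance law, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the linear ramp weight, the gain values, and all variable names are assumptions made for the example.

```python
import numpy as np

def blend_reference(x_nominal, x_residual, t, t_ramp=1.0):
    """Time-varying weighted sum of the nominal reference and the
    RL residual correction. The weight ramps linearly from 0 to 1
    over t_ramp seconds (ramp shape and duration are illustrative
    assumptions, not taken from the paper)."""
    alpha = min(t / t_ramp, 1.0)
    return x_nominal + alpha * x_residual

def impedance_control(x_ref, x, x_dot, K, D):
    """Task-space impedance law: spring-damper wrench pulling the
    end-effector toward the blended reference,
    F = K (x_ref - x) - D x_dot."""
    return K @ (x_ref - x) - D @ x_dot

# Toy usage with assumed gains and states.
K = np.diag([500.0, 500.0, 500.0])      # stiffness [N/m], assumed values
D = np.diag([40.0, 40.0, 40.0])         # damping [N s/m], assumed values
x = np.zeros(3)                          # current end-effector position
x_dot = np.zeros(3)                      # current end-effector velocity
x_nom = np.array([0.0, 0.0, 0.1])        # nominal insertion reference
x_res = np.array([0.005, -0.003, 0.0])   # residual from the (recurrent) RL policy

x_ref = blend_reference(x_nom, x_res, t=0.5, t_ramp=1.0)  # alpha = 0.5 here
F = impedance_control(x_ref, x, x_dot, K, D)
```

Keeping the impedance controller in the loop is what makes early, poorly trained residuals safe: the RL output only shifts the reference, while the compliant spring-damper behavior bounds the interaction forces.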

Keywords