IEEE Access (Jan 2020)

Accelerating Robot Trajectory Learning for Stochastic Tasks

  • Josip Vidakovic
  • Bojan Jerbic
  • Bojan Sekoranja
  • Marko Svaco
  • Filip Suligoj

DOI: https://doi.org/10.1109/ACCESS.2020.2986999
Journal volume & issue: Vol. 8, pp. 71993–72006

Abstract


Learning from demonstration (LfD) provides ways to transfer knowledge and skills from humans to robots. Models based solely on LfD often generalize well but are not fully accurate when adapting to new scenarios, particularly for stochastic tasks, because of the correspondence problem and unmodeled physical properties of the task. Reinforcement learning (RL) methods such as policy search, on the other hand, can refine an initial skill through exploration, but the learning process is highly dependent on the initialization strategy and tends to find only local solutions. The two approaches are therefore frequently combined. In this paper, we show how the iterative learning of tasks can be accelerated by an LfD method based on the extraction of via-points. The approach is evaluated on two different primitive motion tasks.
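To illustrate the general idea described in the abstract (not the authors' actual implementation), the sketch below seeds an episodic policy-search loop with via-points extracted from a single demonstrated trajectory. The via-point selection (a Douglas-Peucker-style simplification), the toy reward, and the reward-weighted exploration update are all illustrative assumptions; only NumPy is required.

```python
# Minimal sketch (assumptions, not the paper's method): via-point extraction
# from one demonstration, followed by simple reward-weighted policy search.
import numpy as np


def extract_via_points(demo, n_via=5):
    """Pick via-points by recursively keeping the sample that deviates most
    from the chord between the current segment endpoints."""
    keep = {0, len(demo) - 1}
    segments = [(0, len(demo) - 1)]
    while len(keep) < n_via and segments:
        i, j = segments.pop(0)
        if j - i < 2:
            continue
        chord = demo[j] - demo[i]
        chord /= np.linalg.norm(chord) + 1e-12
        rel = demo[i + 1:j] - demo[i]
        # Distance of each interior sample from the chord.
        dist = np.linalg.norm(rel - np.outer(rel @ chord, chord), axis=1)
        k = i + 1 + int(np.argmax(dist))
        keep.add(k)
        segments += [(i, k), (k, j)]
    return np.array(sorted(keep))


def rollout_reward(via_pts, target):
    """Toy reward: negative distance of the final via-point to a target,
    standing in for task-success feedback from the robot."""
    return -np.linalg.norm(via_pts[-1] - target)


def refine(via_pts, target, iters=50, sigma=0.01, samples=8):
    """Simple reward-weighted exploration over via-point positions."""
    theta = via_pts.copy()
    for _ in range(iters):
        noise = sigma * np.random.randn(samples, *theta.shape)
        rewards = np.array([rollout_reward(theta + n, target) for n in noise])
        w = np.exp(rewards - rewards.max())
        w /= w.sum()
        theta = theta + np.tensordot(w, noise, axes=1)  # weighted parameter update
    return theta


if __name__ == "__main__":
    t = np.linspace(0, 1, 200)
    demo = np.stack([t, np.sin(np.pi * t)], axis=1)      # demonstrated 2-D path
    idx = extract_via_points(demo, n_via=5)
    via = demo[idx]                                       # LfD initialization
    refined = refine(via, target=np.array([1.0, 0.1]))    # RL-style refinement
    print("via-point indices:", idx)
    print("refined final via-point:", refined[-1])
```

In this toy setting, the demonstration provides the initialization for exploration, so the policy search starts near a plausible solution rather than from scratch, which is the acceleration effect the abstract refers to.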

Keywords