IEEE Access (Jan 2024)

Evaluating Task Optimization and Reinforcement Learning Models in Robotic Task Parameterization

  • Michele Delledonne,
  • Enrico Villagrossi,
  • Manuel Beschi,
  • Alireza Rastegarpanah

DOI
https://doi.org/10.1109/ACCESS.2024.3504354
Journal volume & issue
Vol. 12
pp. 173734 – 173748

Abstract

The rapid evolution of industrial robot hardware has created a technological gap with software, limiting robot adoption. The software solutions proposed in recent years have yet to meet the industrial sector's requirements, as they focus more on defining task structure than on defining and tuning task execution parameters. A framework for task parameter optimization was developed to address this gap. It breaks the task down into a modular structure, allowing it to be optimized piece by piece. The optimization is performed with a dedicated hill-climbing algorithm. This paper revisits the framework by proposing an alternative approach that replaces the algorithmic component with reinforcement learning (RL) models. Five RL models of increasing complexity and efficiency are proposed. A comparative analysis of the traditional algorithm and the RL models is presented, covering efficiency, flexibility, and usability. The results show that although RL models improve task optimization efficiency by 95%, they still fall short in flexibility. Nevertheless, the nature of these models offers significant opportunities for future advancements.
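The abstract mentions a dedicated hill-climbing algorithm for tuning a task's execution parameters. As a rough illustration only, the sketch below shows a generic greedy hill-climbing loop over a small set of named parameters; the parameter names (approach_speed, contact_force), bounds, and toy cost function are hypothetical assumptions for illustration and do not reflect the paper's actual framework or interface.

```python
# Illustrative sketch only: a generic hill-climbing loop over task parameters.
# Parameter names, bounds, and the cost function are hypothetical placeholders.
import random

def hill_climb(params, bounds, cost_fn, step=0.1, max_iters=200):
    """Greedy hill climbing: perturb one parameter at a time and keep
    the change only if it lowers the cost returned by cost_fn."""
    best = dict(params)
    best_cost = cost_fn(best)
    for _ in range(max_iters):
        name = random.choice(list(best))      # pick one parameter to perturb
        lo, hi = bounds[name]
        candidate = dict(best)
        delta = random.uniform(-step, step) * (hi - lo)
        candidate[name] = min(hi, max(lo, best[name] + delta))
        c = cost_fn(candidate)
        if c < best_cost:                     # accept only improving moves
            best, best_cost = candidate, c
    return best, best_cost

# Hypothetical usage: tune two execution parameters of a task module,
# with a toy cost whose optimum lies at (0.3, 15.0).
if __name__ == "__main__":
    bounds = {"approach_speed": (0.05, 0.5), "contact_force": (5.0, 50.0)}
    start = {"approach_speed": 0.2, "contact_force": 20.0}
    cost = lambda p: (p["approach_speed"] - 0.3) ** 2 + ((p["contact_force"] - 15.0) / 50.0) ** 2
    print(hill_climb(start, bounds, cost))
```

Such a local-search loop evaluates one candidate at a time and discards everything it learns between runs; the RL models discussed in the paper are positioned as an alternative to this kind of algorithmic component.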

Keywords