IEEE Access (Jan 2024)

Reinforcement Learning for Autonomous Agents: Scene-Specific Dynamic Obstacle Avoidance and Target Pursuit in Unknown Environments

Zixiang Tang, Fa Fu, Gaoshang Lu, Da Chen

DOI: https://doi.org/10.1109/ACCESS.2024.3463732
Journal volume & issue: Vol. 12, pp. 145496–145510

Abstract

This research presents a novel approach to training autonomous agents in complex, unknown environments, focusing on scene-specific learning, dynamic obstacle avoidance, and target tracking. Traditional reinforcement learning (RL) methods often suffer from high time complexity and inefficiency, which hinder an agent's ability to learn complex behaviors and understand how they interrelate. This limitation poses significant challenges in environments that demand rapid adaptation and multifaceted responses. To address these issues, we propose a scene-specific learning framework that decomposes complex scenes into sub-scenes, enabling targeted training and the acquisition of distinct behaviors, each linked to its own model. In intricate scenarios, observations are transformed into specific signals and fed into a state machine, which then invokes the appropriate model to generate the required actions. Our experiments demonstrate that this approach converges 70% faster than direct reinforcement learning and substantially reduces training time complexity. The structured framework also enhances learning efficiency and provides a scalable solution for sophisticated multi-task learning in autonomous systems, effectively addressing complex reinforcement learning challenges.
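
The abstract describes a dispatch architecture: observations are mapped to a discrete scene signal, and a state machine selects the sub-scene model trained for that situation. The paper itself gives no code, so the following is only a minimal Python sketch of that dispatch pattern; all names (SceneSignal, classify_scene, the per-scene policies) and the numeric thresholds are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of scene-specific dispatch: a state machine maps each
    # observation to a scene signal and invokes the policy (model) trained
    # for that sub-scene. Names and thresholds are hypothetical.
    from enum import Enum, auto
    from typing import Callable, Dict, Sequence

    class SceneSignal(Enum):
        FREE_NAVIGATION = auto()     # no obstacle or target nearby
        OBSTACLE_AVOIDANCE = auto()  # dynamic obstacle within safety radius
        TARGET_PURSUIT = auto()      # target detected and reachable

    Observation = Sequence[float]
    Action = Sequence[float]
    Policy = Callable[[Observation], Action]

    def classify_scene(obs: Observation) -> SceneSignal:
        """Transform a raw observation into a discrete scene signal.
        Assumed layout: obs = (distance_to_nearest_obstacle, distance_to_target, ...)."""
        obstacle_dist, target_dist = obs[0], obs[1]
        if obstacle_dist < 1.0:
            return SceneSignal.OBSTACLE_AVOIDANCE
        if target_dist < 5.0:
            return SceneSignal.TARGET_PURSUIT
        return SceneSignal.FREE_NAVIGATION

    class SceneStateMachine:
        """Dispatches each observation to the model trained for its sub-scene."""

        def __init__(self, policies: Dict[SceneSignal, Policy]):
            self.policies = policies

        def act(self, obs: Observation) -> Action:
            signal = classify_scene(obs)
            return self.policies[signal](obs)

    # Usage: each policy would be a separately trained RL model; constant
    # actions stand in for them here.
    agent = SceneStateMachine({
        SceneSignal.FREE_NAVIGATION: lambda obs: (1.0, 0.0),     # move forward
        SceneSignal.OBSTACLE_AVOIDANCE: lambda obs: (0.2, 0.8),  # steer away
        SceneSignal.TARGET_PURSUIT: lambda obs: (1.0, -0.3),     # steer toward target
    })
    action = agent.act((3.5, 4.2))  # pursuit policy is invoked

The design choice is that each sub-scene policy can be trained in isolation on its simpler task, which is the mechanism the abstract credits for the reported faster convergence compared to training a single end-to-end policy.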

Keywords