Intelligent Computing (Jan 2025)

Action-Curiosity-Based Deep Reinforcement Learning Algorithm for Path Planning in a Nondeterministic Environment

  • Junxiao Xue,
  • Jinpu Chen,
  • Shiwen Zhang

DOI
https://doi.org/10.34133/icomputing.0140
Journal volume & issue
Vol. 4

Abstract


In path planning, the efficiency and effectiveness of deep reinforcement learning (DRL) methods are often constrained by the algorithms’ exploration capabilities, particularly in dynamic and nondeterministic environments. This paper introduces a novel DRL optimization approach based on an action curiosity mechanism, designed to improve both performance and efficiency in uncertain settings. By incentivizing agents to explore their surroundings more effectively, the action curiosity module improves learning efficiency and shortens training time. A dynamically adjusted reward mechanism further strengthens the method’s adaptability and stability in complex or dynamic scenarios. To mitigate the policy degradation that excessive exploration can cause, we incorporate a cosine annealing strategy that adjusts parameters in real time. Extensive experiments show that the enhanced algorithm markedly outperforms conventional methods in success rate, average reward, and other metrics. These results corroborate the proposed method’s robustness and efficacy, laying a solid groundwork for efficient, adaptive autonomous path planning in complex and nondeterministic environments.
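The abstract does not specify the paper's exact parameterization of its cosine annealing strategy, but the standard cosine annealing schedule it refers to can be sketched as follows. Here it is applied, purely as an illustrative assumption, to decay a hypothetical curiosity weight from a maximum to a minimum value over a fixed number of training steps, so that exploration pressure fades smoothly rather than abruptly:

```python
import math

def cosine_annealed(step: int, total_steps: int,
                    v_max: float, v_min: float = 0.0) -> float:
    """Standard cosine annealing: smoothly decay a value from v_max
    (at step 0) to v_min (at step total_steps)."""
    progress = step / total_steps  # fraction of training completed, in [0, 1]
    return v_min + 0.5 * (v_max - v_min) * (1.0 + math.cos(math.pi * progress))

# Example: a hypothetical curiosity-bonus weight annealed over 100 steps.
start = cosine_annealed(0, 100, v_max=1.0)    # 1.0  (full exploration bonus)
mid   = cosine_annealed(50, 100, v_max=1.0)   # 0.5  (halfway point)
end   = cosine_annealed(100, 100, v_max=1.0)  # ~0.0 (exploration annealed away)
```

Because the cosine curve is flat near both endpoints, the schedule changes the weight slowly at the start and end of training and fastest in the middle, which is why it is commonly preferred over linear decay for tempering exploration.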