Scientific Reports (Nov 2024)

Analysis of impact of limb segment length variations during reinforcement learning in four-legged robot

  • Arkadiusz Kubacki
  • Marcin Adamek
  • Piotr Baran

DOI
https://doi.org/10.1038/s41598-024-79333-y
Journal volume & issue
Vol. 14, no. 1
pp. 1–15

Abstract

Crawling robots are becoming increasingly prevalent in both industrial and private applications. Despite their many advantages over other robot types, their movement mechanics are complex. Artificial intelligence can simplify this through reinforcement learning. This process requires configuring the training environment and defining input parameters, including a robot model for movement training. To translate the virtual results into real-world scenarios, a 3D model with appropriate mechanical parameters must be developed. These parameters can vary significantly between mechanical configurations, which in turn affects the reinforcement learning process of such a robot. For this reason, it was decided to test which limb configurations perform best in this process. Initially, various kinematic types of walking robots were analysed, drawing on the anatomy of mammals, reptiles, and insects for the biological model. The reptilian model was chosen for its balance of stability, dynamics, and energy efficiency. The article reviews the preparation of the robot models and the configuration of the Unity3D development environment using the ML-Agents toolkit. The experiment examined how different limb lengths affect training, resulting in movement algorithms for various quadruped robot configurations based on artificial neural networks. According to the numerical results, the best configuration was the default one, with the tibia the same length as the thigh, achieving a reward function value of 883.9 and an episode length of 245.5. By the same criteria, the least efficient configuration was the one with the shortest thigh and the longest tibia among those considered: its reward function reached a value of only 526.2 with an episode length of 999.0, meaning it never achieved the intended goal.
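To make the reported metrics concrete, the sketch below (not taken from the article) shows how a Unity quadruped environment can be driven from the ML-Agents low-level Python API while logging the two quantities the abstract cites, cumulative reward and episode length. The build name "QuadrupedRobot", the step cap, and the random placeholder policy are assumptions for illustration.

```python
# Minimal sketch, assuming a Unity build of a quadruped environment
# exported with the ML-Agents toolkit. Steps the environment once per
# decision and tracks cumulative reward and episode length.
import numpy as np
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.base_env import ActionTuple

env = UnityEnvironment(file_name="QuadrupedRobot")  # hypothetical build name
env.reset()
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

cumulative_reward, episode_length = 0.0, 0
for _ in range(1000):  # step cap; mirrors the 999-step episodes above
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    if len(terminal_steps) > 0:  # episode ended (goal reached or reset)
        cumulative_reward += terminal_steps.reward.sum()
        break
    cumulative_reward += decision_steps.reward.sum()
    episode_length += 1
    # Random continuous joint actions stand in for the trained policy.
    actions = ActionTuple(
        continuous=np.random.uniform(
            -1.0, 1.0, (len(decision_steps), spec.action_spec.continuous_size)
        )
    )
    env.set_actions(behavior_name, actions)
    env.step()

print(f"reward={cumulative_reward:.1f}, episode length={episode_length}")
env.close()
```

In the study's terms, a run that exhausts the step cap without a terminal step corresponds to an episode that never reached its goal, as in the 999.0-step worst case above.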