SICE Journal of Control, Measurement, and System Integration (Dec 2024)

Motion planner based on CNN with LSTM through mediated perception for obstacle avoidance

  • Satoshi Hoshino,
  • Yu Kubota,
  • Yusuke Yoshida

DOI: https://doi.org/10.1080/18824889.2024.2307684
Journal volume & issue: Vol. 17, no. 1, pp. 19–30

Abstract

For autonomous navigation, a mobile robot must move toward a destination while avoiding obstacles. In this paper, we present a motion planner based on a CNN. Since the position of a dynamic obstacle changes over time, it is important for the robot to plan avoidance motions that account for the time-series variation in the input images; for this purpose, an LSTM block is added to the CNN. The policy of the motion planner, represented by the CNN with LSTM, is trained through imitation learning. Even so, it remains difficult for the robot to recognize unknown objects as obstacles, so a perception process is inserted between the image inputs and the CNN with LSTM for obstacle recognition. Moreover, the robot should plan different avoidance motions depending on the velocity of the dynamic obstacle. To this end, an obstacle state classifier based on a CNN is placed ahead of the motion planner: a depth-difference image generated from two depth images is fed to the classifier, and the classified state, which indicates the velocity of the obstacle, is fed to the motion planner. In navigation experiments, we show that a robot using the proposed motion planner can move toward a destination autonomously while avoiding standing and walking persons, respectively. Furthermore, with the obstacle state input, the robot plans different avoidance motions for persons walking slowly or quickly, using the obstacle state classifier.
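To illustrate the pipeline described in the abstract, the sketch below shows a CNN-with-LSTM motion planner whose output is additionally conditioned on a classified obstacle state, plus the depth-difference operation that feeds the state classifier. This is a minimal PyTorch sketch under our own assumptions: the layer sizes, the three-way state encoding, and the two-dimensional velocity command are illustrative choices, not the authors' actual network.

```python
import torch
import torch.nn as nn

class CNNLSTMPlanner(nn.Module):
    """Sketch of a CNN-with-LSTM motion planner (illustrative sizes).

    Per frame, a small CNN encodes the depth image; an LSTM aggregates
    the encoded sequence so the policy can react to time-series variation;
    the obstacle state from an upstream classifier is concatenated before
    the output head, which emits a velocity command (v, omega)."""

    def __init__(self, n_states=3, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())   # -> 16 * 4 * 4 = 256
        self.lstm = nn.LSTM(256, hidden, batch_first=True)
        self.head = nn.Linear(hidden + n_states, 2)  # (v, omega)

    def forward(self, frames, state):
        # frames: (B, T, 1, H, W) depth sequence; state: (B, n_states) one-hot
        B, T = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(B, T, -1)
        out, _ = self.lstm(feats)
        return self.head(torch.cat([out[:, -1], state], dim=1))

def depth_difference(d_prev, d_curr):
    """Depth-difference image for the obstacle state classifier:
    subtracting consecutive depth frames highlights moving pixels."""
    return d_curr - d_prev

planner = CNNLSTMPlanner()
frames = torch.zeros(2, 5, 1, 64, 64)  # batch of 2 five-frame sequences
state = torch.tensor([[1., 0., 0.], [0., 0., 1.]])  # e.g. standing / fast
cmd = planner(frames, state)
print(cmd.shape)  # torch.Size([2, 2])
```

The one-hot `state` vector stands in for the classifier's output; in the paper's pipeline it would come from a separate CNN fed the depth-difference image, letting the planner choose different avoidance motions for slow and fast obstacles.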

Keywords