Gong-kuang zidonghua (Apr 2024)

Autonomous pose estimation of underground disaster rescue drones based on visual and laser fusion

  • HE Yijing,
  • YANG Wei

DOI
https://doi.org/10.13272/j.issn.1671-251x.2023080124
Journal volume & issue
Vol. 50, no. 4
pp. 94–102

Abstract

The autonomous navigation capability of drones in post-disaster mines is a prerequisite for performing rescue and disaster-relief tasks, and autonomous pose estimation in unknown three-dimensional space is one of the key technologies for autonomous drone navigation. At present, vision-based pose estimation algorithms suffer from scale ambiguity and poor positioning performance, because a monocular camera cannot directly obtain depth information in three-dimensional space and is susceptible to the dim lighting underground; laser-based pose estimation algorithms, in turn, are prone to errors caused by the LiDAR's small field of view and uneven scanning pattern, and by the constraints imposed by the structural characteristics of mine scenes. To solve these problems, an autonomous pose estimation algorithm for underground disaster-rescue drones based on visual and laser fusion is proposed. Firstly, the monocular camera and LiDAR carried by the underground drone acquire image data and laser point cloud data of the mine; ORB feature points are uniformly extracted from each frame of the mine image data, their depth is recovered from the laser point cloud, and vision-based pose estimation of the drone is achieved through inter-frame matching of the feature points. Secondly, feature corner points and feature plane points are extracted from each frame of the underground laser point cloud data, and laser-based pose estimation of the drone is achieved through inter-frame matching of these feature points. Thirdly, the visual matching error function and the laser matching error function are placed under the same pose optimization function, and the pose of the underground drone is estimated through the fusion of vision and laser. Finally, historical frame data are introduced through a visual sliding window and a laser local map to construct an error function between the historical frame data and the latest estimated pose; nonlinear optimization of this error function completes the optimization and correction of the drone pose under local constraints, preventing accumulated estimation errors from causing the drone's trajectory to drift. Simulation experiments reproducing the complex environment of a mine after a disaster were conducted. The results show that the average relative translation error and average relative rotation error of the pose estimation algorithm based on visual and laser fusion are 0.0011 m and 0.0008°, respectively, and that the average processing time for one frame of data is less than 100 ms. The algorithm does not experience trajectory drift during long-term operation underground. Compared with pose estimation algorithms based solely on vision or laser, the fusion algorithm achieves improved accuracy and stability, and its real-time performance meets the requirements.
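
To make the depth-recovery step concrete, the following is a minimal sketch (not the authors' code) of assigning metric depth to ORB features by projecting LiDAR points into the camera image. The intrinsic matrix K, the extrinsics (R, t) between LiDAR and camera, the feature count, and the pixel association radius are all illustrative assumptions; the paper's uniform (grid-based) feature distribution is omitted for brevity.

import numpy as np
import cv2

def assign_depth_to_orb(img_gray, cloud_xyz, K, R, t, radius_px=3.0):
    """Return ORB keypoints paired with depth taken from nearby projected LiDAR points."""
    orb = cv2.ORB_create(nfeatures=500)
    keypoints = orb.detect(img_gray, None)

    # Transform LiDAR points into the camera frame; keep points in front of the camera.
    pts_cam = (R @ cloud_xyz.T + t.reshape(3, 1)).T
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]
    if len(pts_cam) == 0:
        return []

    # Pinhole projection into pixel coordinates.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    features = []
    for kp in keypoints:
        d2 = np.sum((uv - np.array(kp.pt)) ** 2, axis=1)
        i = np.argmin(d2)
        if d2[i] < radius_px ** 2:                   # a projected point lies close enough
            features.append((kp.pt, pts_cam[i, 2]))  # (pixel location, metric depth)
    return features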
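
The laser feature step ("feature corner points and feature plane points") can be illustrated with a LOAM-style smoothness test, sketched below under assumed thresholds and neighbourhood size; the paper's exact criterion is not specified in the abstract, so this is an assumption about the general technique, not the authors' implementation.

import numpy as np

def extract_scan_features(scan_xyz, half_window=5, edge_thresh=1.0, plane_thresh=0.1):
    """Split one LiDAR scan line into edge (corner) and planar points by local curvature."""
    n = len(scan_xyz)
    edges, planes = [], []
    for i in range(half_window, n - half_window):
        # Curvature proxy: deviation of the point from the sum of its neighbours.
        neigh = scan_xyz[i - half_window:i + half_window + 1]
        diff = neigh.sum(axis=0) - (2 * half_window + 1) * scan_xyz[i]
        c = np.linalg.norm(diff) / np.linalg.norm(scan_xyz[i])
        if c > edge_thresh:
            edges.append(i)       # sharp region -> corner feature
        elif c < plane_thresh:
            planes.append(i)      # smooth region -> planar feature
    return scan_xyz[edges], scan_xyz[planes]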
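
Finally, a hedged sketch of the fusion step: the abstract places the visual and laser matching error functions "under the same pose optimization function", which can be read as stacking visual reprojection residuals and laser point-to-plane residuals into one nonlinear least-squares cost over a single 6-DoF pose. The rotation-vector parameterization, the weights w_vis and w_las, the solver, and the assumption that plane correspondences are already matched are all illustrative choices.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def fused_pose(pts3d, obs_uv, K, laser_pts, plane_ns, plane_ds, w_vis=1.0, w_las=1.0):
    """Estimate pose x = [rotvec, t] from stacked visual and laser constraints."""
    def residuals(x):
        Rm, t = Rotation.from_rotvec(x[:3]).as_matrix(), x[3:]
        # Visual term: reprojection error of depth-recovered feature points.
        p = (Rm @ pts3d.T).T + t
        uv = (K @ p.T).T
        uv = uv[:, :2] / uv[:, 2:3]
        r_vis = w_vis * (uv - obs_uv).ravel()
        # Laser term: signed distance of transformed points to pre-matched planes
        # (plane n·q + d = 0, with normals plane_ns and offsets plane_ds).
        q = (Rm @ laser_pts.T).T + t
        r_las = w_las * (np.sum(q * plane_ns, axis=1) + plane_ds)
        return np.concatenate([r_vis, r_las])
    sol = least_squares(residuals, np.zeros(6))
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]

The sliding-window correction described at the end of the abstract extends the same idea: residuals tying the latest pose to historical frames in the visual sliding window and laser local map would be appended to the stacked cost, so that re-optimizing it suppresses accumulated drift under local constraints.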

Keywords