Symmetry (Nov 2021)

Research on Fast Target Positioning Method of Self-Calibration Manipulator

  • Xuhui Ye,
  • Yuxuan Tang,
  • Xinyu Hu,
  • Daode Zhang,
  • Qi Chen

DOI
https://doi.org/10.3390/sym13112135
Journal volume & issue
Vol. 13, no. 11
p. 2135

Abstract

Hand-eye calibration and three-dimensional target positioning are key to automatic grasping with a manipulator. To address the difficulty of camera-manipulator calibration and the poor real-time segmentation and positioning of stacked targets in industrial environments, a rapid target positioning method for a self-calibrating manipulator is proposed. First, autonomous path planning on a spatial sphere is performed using quaternion linear interpolation to compute a spatially symmetric trajectory for the calibration target. An RGB-D camera mounted at the end of the manipulator captures multiple groups of RGB and depth images of the calibration plate; combined with the pose of the manipulator end-effector, the camera's intrinsic and extrinsic parameters and the hand-eye transformation matrix are calibrated automatically. Next, holes in the point cloud are detected with a KD-tree algorithm to plan the shooting poses of complementary images, and the target object is photographed from multiple symmetric angles. Using the manipulator's shooting pose at each iteration, the point clouds are rapidly registered and a complete model of the target's outer surface is obtained. Finally, an improved double-pyramid fusion of depth-image features is used to segment the RGB image with Mask R-CNN, and the segmentation is mapped into point cloud space to achieve fast, end-to-end 3D point cloud target segmentation. Experimental results show that the eye-in-hand manipulator system can self-calibrate, greatly simplifying the calibration process while matching the accuracy of traditional calibration methods. The average calibration error in each direction is less than 2 mm, within the acquisition accuracy of the vision sensor. Point clouds of complex scenes can be registered and reconstructed within 1 s. The improved Mask R-CNN increases segmentation accuracy on stacked objects by 8%. Relative to the physical error of the hardware, the positioning error is no more than 0.89%, which meets the requirements of practical applications.
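The first step interpolates camera orientations along a symmetric spherical path around the calibration plate. Below is a minimal sketch of that idea using SciPy's quaternion spherical linear interpolation (Slerp); the key poses, sphere radius, and number of viewpoints are illustrative assumptions, not the paper's actual planner parameters.

```python
# Sketch: interpolate end-effector orientations along a spherical path with
# quaternion slerp (SciPy). Waypoints and radius are hypothetical values.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

# Two key camera orientations looking at the calibration plate (assumed),
# as quaternions in (x, y, z, w) order.
key_rots = Rotation.from_quat([
    [0.0, 0.0, 0.0, 1.0],        # start: camera facing straight down
    [0.0, 0.7071, 0.0, 0.7071],  # end: camera tilted 90 deg about Y
])
slerp = Slerp([0.0, 1.0], key_rots)

ts = np.linspace(0.0, 1.0, 10)   # 10 evenly spaced viewpoints (assumed)
r = 0.4                          # sphere radius in metres (assumed)
for t, rot in zip(ts, slerp(ts)):
    # Camera sits at distance r in front of the plate along its view axis.
    position = rot.apply([0.0, 0.0, -r])
    print(t, position, rot.as_quat())
```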
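For the calibration step itself, the abstract does not name the solver; a common choice for recovering the hand-eye transformation matrix from the captured pose pairs is OpenCV's calibrateHandEye, shown here on synthetic data so the chain of transforms is explicit. Treat this as a sketch under that assumption, not the authors' implementation.

```python
# Sketch: recover the eye-in-hand transform X (camera -> gripper) with
# cv2.calibrateHandEye. Synthetic poses stand in for the robot readings
# and the board detections from the planned viewpoints.
import cv2
import numpy as np

rng = np.random.default_rng(0)

def rand_pose(rng):
    """Random 4x4 rigid transform (rotation via Rodrigues axis-angle)."""
    R, _ = cv2.Rodrigues(rng.uniform(-np.pi, np.pi, 3))
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = rng.uniform(-0.5, 0.5, 3)
    return T

X_true = rand_pose(rng)         # ground-truth camera-to-gripper transform
T_target2base = rand_pose(rng)  # board fixed in the robot base frame

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):
    T_g2b = rand_pose(rng)      # end-effector pose read from the robot
    # Board pose seen by the camera follows from the kinematic chain:
    # T_target2base = T_g2b @ X @ T_t2c  =>  solve for T_t2c.
    T_t2c = np.linalg.inv(X_true) @ np.linalg.inv(T_g2b) @ T_target2base
    R_g2b.append(T_g2b[:3, :3]); t_g2b.append(T_g2b[:3, 3])
    R_t2c.append(T_t2c[:3, :3]); t_t2c.append(T_t2c[:3, 3])

R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
print(np.allclose(R_est, X_true[:3, :3], atol=1e-6))  # True on clean data
```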
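The hole-extraction step can be approximated with a KD-tree radius search: points whose neighbourhoods are sparse border missing surface, and their locations can seed the next shooting pose. A minimal sketch follows; the radius and density threshold are assumptions, since the abstract does not give the paper's exact criterion.

```python
# Sketch: flag sparse regions (candidate hole boundaries) in a point cloud
# with a KD-tree radius query. Radius and threshold are assumed values.
import numpy as np
from scipy.spatial import cKDTree

points = np.random.rand(5000, 3)   # placeholder cloud in metres
tree = cKDTree(points)

radius = 0.05                      # neighbourhood radius (assumed)
min_neighbors = 8                  # density threshold (assumed)

# Count neighbours within `radius` of every point in one batched query.
counts = tree.query_ball_point(points, r=radius, return_length=True)
boundary = points[counts < min_neighbors]

# Centroids of these sparse clusters can aim the complementary view
# that fills in the missing surface.
print(len(boundary), "candidate boundary points")
```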

Keywords