Applied Sciences (Feb 2020)

Semantic 3D Reconstruction for Robotic Manipulators with an Eye-In-Hand Vision System

  • Fusheng Zha,
  • Yu Fu,
  • Pengfei Wang,
  • Wei Guo,
  • Mantian Li,
  • Xin Wang,
  • Hegao Cai

DOI
https://doi.org/10.3390/app10031183
Journal volume & issue
Vol. 10, no. 3
p. 1183

Abstract

Three-dimensional reconstruction and semantic understanding have attracted extensive attention in recent years. However, current reconstruction techniques mainly target large-scale scenes, such as indoor environments or autonomous driving. There are few studies on small-scale, high-precision scene reconstruction for manipulator operation, which plays an essential role in decision-making and intelligent control systems. In this paper, a group of images captured by an eye-in-hand vision system mounted on a robotic manipulator is segmented using deep learning and geometric features, and a semantic 3D reconstruction is created with a map-stitching method. The results demonstrate that the quality of the segmented images and the precision of the semantic 3D reconstruction are effectively improved by our method.
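The pipeline the abstract describes (per-image semantic segmentation, then fusion of the labeled views into one 3D map) can be sketched in broad strokes. The snippet below is a minimal, hypothetical illustration, not the paper's implementation: it assumes depth images aligned with the segmentation labels, a pinhole camera intrinsic matrix `K`, and camera poses `T_world_cam` known from the manipulator's forward kinematics (the usual advantage of an eye-in-hand setup). All function names are illustrative.

```python
import numpy as np

def backproject_labels(depth, labels, K, T_world_cam):
    """Back-project a depth image with per-pixel semantic labels
    into a labeled point cloud expressed in the world frame.

    depth:        (H, W) depth in meters; 0 marks invalid pixels.
    labels:       (H, W) integer semantic class per pixel.
    K:            (3, 3) pinhole intrinsics.
    T_world_cam:  (4, 4) camera pose from the robot's kinematics.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0
    # Pinhole back-projection: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=0)[:, valid]
    pts_world = (T_world_cam @ pts_cam)[:3].T          # (N, 3)
    return pts_world, labels.ravel()[valid]

def stitch(frames):
    """Stitch several labeled clouds into one semantic map by
    concatenating them in the shared world frame."""
    pts = np.vstack([f[0] for f in frames])
    lab = np.concatenate([f[1] for f in frames])
    return pts, lab
```

In practice a system like the one described would also filter redundant points and resolve label conflicts between overlapping views, but the sketch shows why known hand-eye poses make stitching a simple rigid transform rather than a registration problem.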

Keywords