The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (Aug 2020)

DENSE 3D OBJECT RECONSTRUCTION USING STRUCTURED-LIGHT SCANNER AND DEEP LEARNING

  • V. V. Kniaz,
  • V. A. Mizginov,
  • L. V. Grodzitkiy,
  • N. A. Fomin,
  • V. A. Knyaz

DOI: https://doi.org/10.5194/isprs-archives-XLIII-B2-2020-777-2020
Journal volume & issue: Vol. XLIII-B2-2020, pp. 777–783

Abstract

Structured-light scanners are widely used in various applications such as non-destructive quality control on the assembly line, optical metrology, and cultural heritage documentation. Although more than 20 companies develop commercially available structured-light scanners, the accuracy of structured-light technology remains limited for fast scanning systems. Discrepancies often appear on the model surface if the object's texture exhibits severe changes in brightness or reflective properties. The primary source of such discrepancies is stereo-matching errors caused by complex surface texture. These errors result in ridge-like structures on the surface of the reconstructed 3D model. This paper focuses on the development of a deep neural network, LineMatchGAN, for error reduction in 3D models produced by a structured-light scanner. We use the pix2pix model as a starting point for our research. Our LineMatchGAN refines the rough optical flow A and generates an error-free optical flow B̂. We collected a dataset (which we term ZebraScan) consisting of 500 samples to train our LineMatchGAN model. Each sample includes image sequences (Sl, Sr), a ground-truth optical flow B, and a ground-truth 3D model. We evaluate our LineMatchGAN on a test split of our ZebraScan dataset that includes 50 samples. The evaluation shows that our LineMatchGAN improves the stereo-matching accuracy (optical flow end-point error, EPE) from 0.05 pixels to 0.01 pixels.
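
To illustrate the refinement step and the evaluation metric mentioned in the abstract, the sketch below shows a minimal residual encoder-decoder that maps a rough optical flow A to a refined flow B̂, together with the end-point error (EPE) computation. The architecture, layer sizes, and the names FlowRefiner and end_point_error are illustrative assumptions, not the paper's implementation; the actual LineMatchGAN generator follows the pix2pix architecture and may also condition on the image sequences (Sl, Sr).

    import numpy as np
    import torch
    import torch.nn as nn


    class FlowRefiner(nn.Module):
        """Illustrative residual encoder-decoder: rough flow A -> refined flow B_hat.

        This is only a sketch of the refinement idea; the paper's generator is
        based on pix2pix and trained adversarially.
        """

        def __init__(self, channels: int = 2, base: int = 32):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(channels, base, kernel_size=4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(base, base * 2, kernel_size=4, stride=2, padding=1),
                nn.BatchNorm2d(base * 2),
                nn.LeakyReLU(0.2, inplace=True),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(base * 2, base, kernel_size=4, stride=2, padding=1),
                nn.BatchNorm2d(base),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(base, channels, kernel_size=4, stride=2, padding=1),
            )

        def forward(self, rough_flow: torch.Tensor) -> torch.Tensor:
            # Predict a per-pixel correction and add it to the rough flow estimate.
            return rough_flow + self.decoder(self.encoder(rough_flow))


    def end_point_error(flow_pred: np.ndarray, flow_gt: np.ndarray) -> float:
        """Mean Euclidean distance between predicted and ground-truth flow vectors.

        Both arrays are expected to have shape (H, W, 2).
        """
        diff = flow_pred - flow_gt
        return float(np.mean(np.sqrt(np.sum(diff ** 2, axis=-1))))


    # Hypothetical usage on a single test sample (variable names are illustrative):
    # refiner = FlowRefiner()
    # refined = refiner(rough_flow_tensor)  # rough_flow_tensor: (1, 2, H, W)
    # epe = end_point_error(refined[0].permute(1, 2, 0).detach().numpy(), gt_flow)

The residual formulation (predicting a correction that is added to the rough flow) is one plausible way to frame the refinement step; it is shown here only to make the A → B̂ mapping and the EPE figures reported in the abstract concrete.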