IEEE Access (Jan 2019)

Generative Adversarial Network-Based Method for Transforming Single RGB Image Into 3D Point Cloud

  • Phuong Minh Chu,
  • Yunsick Sung,
  • Kyungeun Cho

DOI
https://doi.org/10.1109/ACCESS.2018.2886213
Journal volume & issue
Vol. 7
pp. 1021 – 1029

Abstract

Three-dimensional (3D) point clouds are important for many applications, including object tracking and 3D scene reconstruction. Point clouds are usually obtained from laser scanners, but their high cost impedes the widespread adoption of this technology. We propose a method to generate the 3D point cloud corresponding to a single red–green–blue (RGB) image. The method retrieves high-quality 3D data from two-dimensional (2D) images captured by conventional cameras, which are generally less expensive. The proposed method comprises two stages. First, a generative adversarial network estimates a depth image from a single RGB image. Then, the 3D point cloud is calculated from the depth image. The estimation relies on the parameters of the depth camera employed to generate the training data. The experimental results verify that the proposed method provides high-quality 3D point clouds from single 2D images. Moreover, the method does not require a PC with exceptional computational resources, further reducing implementation costs, as a moderate-capacity graphics processing unit suffices to handle the calculations efficiently.
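The second stage described in the abstract, computing a point cloud from an estimated depth image using the depth camera's parameters, is not specified in detail here; the sketch below assumes a standard pinhole back-projection with hypothetical intrinsic values (fx, fy, cx, cy), not the authors' exact formulation.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an (N, 3) point cloud.

    fx, fy, cx, cy are the intrinsics of the depth camera used to
    capture the training data (the values used below are placeholders).
    """
    h, w = depth.shape
    # Pixel coordinate grids (u along width, v along height)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    # Discard pixels with no valid depth measurement
    return points[points[:, 2] > 0]

# Usage with a synthetic depth map and Kinect-like intrinsics (assumed values)
depth = np.random.uniform(0.5, 4.0, size=(480, 640)).astype(np.float32)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (N, 3) array of XYZ points
```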

Keywords