ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences (Aug 2020)

FUSED 3D TRANSPARENT VISUALIZATION FOR LARGE-SCALE CULTURAL HERITAGE USING DEEP LEARNING-BASED MONOCULAR RECONSTRUCTION

  • J. Pan,
  • L. Li,
  • H. Yamaguchi,
  • K. Hasegawa,
  • F. I. Thufail,
  • Brahmantara,
  • S. Tanaka

DOI: https://doi.org/10.5194/isprs-annals-V-2-2020-989-2020
Journal volume & issue: Vol. V-2-2020, pp. 989–996

Abstract


This paper proposes a fused 3D transparent visualization method that achieves see-through imaging of large-scale cultural heritage by combining photogrammetric point cloud data with 3D reconstructed models. The models are reconstructed efficiently from single monocular photos using deep learning. The proposed method is widely applicable, particularly to incomplete cultural heritage sites, and is demonstrated here on a representative example, the Borobudur temple in Indonesia. The Borobudur temple possesses the world's most complete collection of Buddhist reliefs. However, some of the reliefs were hidden behind stone walls during reinforcement work under Dutch rule and are no longer visible; today, only gray-scale monocular photos of these hidden parts are displayed in the Borobudur Museum. In this paper, the visible parts of the temple are first digitized into point cloud data by photogrammetric scanning. For the hidden parts, a deep learning-based 3D reconstruction method is proposed that recovers point cloud data directly from the single monocular museum photos. The proposed reconstruction method achieves 95% accuracy of the reconstructed point cloud on average. Using the point cloud data of both the visible and hidden parts, the proposed transparent visualization method, called stochastic point-based rendering, is applied to achieve a fused 3D transparent visualization of the valuable temple.
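The core idea behind stochastic point-based rendering, as described in the authors' related work, is to split a point cloud into statistically independent subsets, render each subset with fully opaque points, and average the resulting images; transparency then emerges from the averaging without any depth sorting. The sketch below is a hedged, minimal Monte-Carlo illustration of that ensemble-averaging principle (not the authors' implementation; the function name and parameters are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_average_opacity(num_points, pixel_area, point_area, num_ensembles):
    """Estimate the opacity a single pixel receives when `num_points`
    opaque points (each covering `point_area` of a pixel of area
    `pixel_area`) are split evenly into `num_ensembles` independent
    opaque renderings whose images are then averaged.

    Illustrative sketch only -- a real renderer would do this per pixel
    over projected point footprints.
    """
    points_per_ensemble = num_points // num_ensembles
    opaque_count = 0
    for _ in range(num_ensembles):
        # Does any point of this ensemble's subset cover the pixel?
        covered = rng.random(points_per_ensemble) < point_area / pixel_area
        if covered.any():
            opaque_count += 1  # pixel is opaque in this ensemble image
    # Averaging the ensemble images yields a fractional (transparent) value.
    return opaque_count / num_ensembles
```

Because each ensemble image is rendered with opaque points only, the method is order-independent: fusing the photogrammetric point cloud of the visible temple with the reconstructed point cloud of the hidden reliefs requires no sorting or blending-order bookkeeping, only merging the point sets before the ensembles are drawn.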