IEEE Access (Jan 2025)

ADR-SALD: Attention-Based Deep Residual Sign Agnostic Learning With Derivatives for Implicit Surface Reconstruction

  • Abol Basher,
  • Jani Boutellier

DOI
https://doi.org/10.1109/access.2025.3549279
Journal volume & issue
Vol. 13
pp. 44243–44259

Abstract


Learning 3D shape directly from raw data (i.e., unoriented meshes, raw point clouds, or triangle soups) and reconstructing high-fidelity surfaces remain difficult problems in computer vision and graphics. Several approaches have been proposed to learn from raw data; however, their reconstruction quality is limited in capturing small details. Moreover, they introduce spurious surface sheets across large gaps and empty spaces, and struggle to reconstruct small openings and thin structures. In this study, we address these problems by proposing ADR-SALD, a novel attention-based variational autoencoder architecture whose encoder and decoder are built on residual feature learning and an inception-like neural structure. We adopt two different self-attention mechanisms for sign agnostic learning in the encoder, which allow the proposed approach to learn the global spatial contextual dependencies and the local features of a 3D shape simultaneously. This novel architecture solves the surface-sheet problem of previous approaches such as SALD. Moreover, our experimental results show that ADR-SALD reconstructs thin structures more successfully than the state-of-the-art approaches SALD and DC-DFFN, and performs significantly better at separating small gaps. The proposed approach outperforms the baseline state-of-the-art approaches in both reconstruction quality and quantitative measures.
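The abstract does not specify the exact attention design used in ADR-SALD; as a generic illustration only, the self-attention mechanism it refers to is conventionally scaled dot-product attention applied to per-point features, where every output feature mixes information from all points and thereby captures global spatial context. A minimal NumPy sketch (all names and dimensions here are hypothetical, not from the paper) might look like:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a set of per-point features.

    x: (n_points, d) feature matrix; w_q, w_k, w_v: (d, d_k) projections.
    Each output row is a weighted mixture over ALL input points, which is
    how attention models global contextual dependencies in a point set.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[1])           # (n, n) pairwise affinities
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)    # softmax over points
    return weights @ v                               # (n, d_k) attended features

rng = np.random.default_rng(0)
n, d = 8, 4
x = rng.normal(size=(n, d))
out = self_attention(x,
                     rng.normal(size=(d, d)),
                     rng.normal(size=(d, d)),
                     rng.normal(size=(d, d)))
```

In an encoder such as the one described, a block like this would typically be combined with residual connections (adding `x` back to the attended output) so that local per-point features and global context are learned jointly.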

Keywords