IEEE Access (Jan 2024)

ProNeRF: Learning Efficient Projection-Aware Ray Sampling for Fine-Grained Implicit Neural Radiance Fields

  • Juan Luis Gonzalez Bello
  • Minh-Quan Viet Bui
  • Munchurl Kim

DOI: https://doi.org/10.1109/ACCESS.2024.3390753
Journal volume & issue: Vol. 12, pp. 56799–56814

Abstract

Recent advances in neural rendering have shown that implicit, compact models, although computationally expensive and slow to train, can accurately learn a scene’s geometry and view-dependent appearance from multiple views. To keep such a small memory footprint while achieving faster inference, recent works have adopted ‘sampler’ networks that adaptively sample a small subset of points along each ray in implicit neural radiance fields (NeRF), effectively reducing the number of network forward passes needed to render a ray color. Although these methods achieve up to a 10× reduction in rendering time, they still suffer from considerable quality degradation compared to vanilla NeRF. In contrast, we propose a new projection-aware neural radiance field model, referred to as ProNeRF, which provides an optimal trade-off between memory footprint (similar to NeRF), speed (faster than HyperReel), and quality (better than K-Planes). ProNeRF is equipped with a novel projection-aware sampling (PAS) network together with a new training strategy for ray exploration and exploitation, allowing for efficient fine-grained particle sampling. Our exploration-and-exploitation training strategy allows ProNeRF to learn the color and density distributions of full scenes while also learning efficient ray sampling focused on the highest-density regions. ProNeRF yields state-of-the-art metrics, running 15 to 23× faster than vanilla NeRF with 0.65 dB higher PSNR, and achieving 0.95 dB higher PSNR than the best published sampler-based method, HyperReel. We provide extensive experimental results that support the effectiveness of our method on the widely adopted forward-facing and 360° datasets, LLFF and Blender, respectively. Additionally, we present real-world applications of ProNeRF on hand-held captured scenes. Our project page is publicly available at https://kaist-viclab.github.io/pronerf-site.
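To make the sampler idea described above concrete, below is a minimal PyTorch-style sketch of how a small network can predict a few depths per ray so that the radiance field is queried only at those points, instead of at densely stratified samples. This is an illustrative assumption, not the authors' PAS implementation: the names SamplerNet and render_rays, the hyperparameters, and the nerf_mlp interface (returning per-point densities and colors) are all hypothetical.

import torch
import torch.nn as nn

class SamplerNet(nn.Module):
    """Hypothetical sampler: maps a ray (origin, direction) to K sorted
    depths along the ray, concentrating samples where density is expected."""
    def __init__(self, num_samples=8, near=2.0, far=6.0):
        super().__init__()
        self.near, self.far = near, far
        self.mlp = nn.Sequential(
            nn.Linear(6, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_samples),
        )

    def forward(self, rays_o, rays_d):
        # Predict logits, then turn them into monotonically increasing
        # depths inside [near, far] via a softmax cumulative sum.
        logits = self.mlp(torch.cat([rays_o, rays_d], dim=-1))
        fractions = torch.cumsum(torch.softmax(logits, dim=-1), dim=-1)
        return self.near + (self.far - self.near) * fractions  # (B, K)

def render_rays(sampler, nerf_mlp, rays_o, rays_d):
    """Query the radiance field only at the K sampled depths per ray,
    then alpha-composite; far fewer forward passes than dense sampling."""
    t = sampler(rays_o, rays_d)                              # (B, K) depths
    pts = rays_o[:, None] + t[..., None] * rays_d[:, None]   # (B, K, 3)
    sigma, rgb = nerf_mlp(pts, rays_d)                       # (B, K), (B, K, 3)
    # Standard volume-rendering quadrature over the K samples.
    delta = torch.diff(t, dim=-1, append=t[:, -1:] + 1e10)   # (B, K)
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(torch.cat(
        [torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[:, :-1]
    weights = alpha * trans
    return (weights[..., None] * rgb).sum(dim=-2)            # (B, 3) colors

In this sketch, a single forward pass through the sampler replaces the dense stratified sampling of vanilla NeRF, which is where the speed-up reported in the abstract would come from under these assumptions.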

Keywords