In this paper, we propose NeuS-PIR, a novel approach for learning relightable neural surfaces using pre-integrated rendering from multi-view image observations. Unlike traditional methods based on NeRFs or discrete mesh representations, our approach employs an implicit neural surface representation to reconstruct high-quality geometry. This representation enables the factorization of the radiance field into two components: a spatially varying material field and an all-frequency lighting model. By jointly optimizing this factorization with a differentiable pre-integrated rendering framework and a material-encoding regularization, our method effectively addresses the ambiguity in geometry reconstruction, leading to improved disentanglement and refinement of scene properties. Furthermore, we introduce a technique to distill indirect illumination fields, capturing complex lighting effects such as inter-reflections. As a result, NeuS-PIR enables advanced applications such as relighting, and its outputs can be seamlessly integrated into modern graphics engines. Extensive qualitative and quantitative experiments on both synthetic and real datasets demonstrate that NeuS-PIR outperforms existing methods across various tasks. Source code is available at https://github.com/Sheldonmao/NeuSPIR.
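To make the pre-integrated rendering idea concrete: split-sum shading approximates the rendering integral as a pre-filtered environment lookup (indexed by roughness) times a pre-integrated BRDF term, added to a diffuse term driven by irradiance. The sketch below is a minimal, hedged illustration of that general technique, not the authors' implementation; it uses a metallic-roughness material, represents the pre-filtered environment as a list of per-roughness colors, and substitutes Karis's analytic fit for the usual env-BRDF lookup table.

```python
import numpy as np

def split_sum_shading(albedo, metallic, roughness, n, v,
                      prefiltered_env, irradiance):
    """Minimal split-sum (pre-integrated) shading sketch:
    L_o ≈ (1 - metallic) * albedo * irradiance            (diffuse term)
        + prefiltered_env[roughness] * (F0 * A + B)       (specular term)
    A, B come from Karis's analytic fit to the pre-integrated BRDF.
    """
    n = n / np.linalg.norm(n)
    v = v / np.linalg.norm(v)
    n_dot_v = max(float(np.dot(n, v)), 1e-4)

    # Base reflectance: dielectrics ~0.04, metals take the albedo color.
    f0 = 0.04 * (1.0 - metallic) + albedo * metallic

    # Analytic approximation of the pre-integrated env-BRDF (Karis fit),
    # standing in for the usual 2D lookup table.
    c0 = np.array([-1.0, -0.0275, -0.572, 0.022])
    c1 = np.array([1.0, 0.0425, 1.04, -0.04])
    r4 = roughness * c0 + c1
    a004 = min(r4[0] * r4[0], np.exp2(-9.28 * n_dot_v)) * r4[0] + r4[1]
    A = -1.04 * a004 + r4[2]
    B = 1.04 * a004 + r4[3]

    # Rougher surfaces sample a more blurred (coarser) environment level.
    mip = int(round(roughness * (len(prefiltered_env) - 1)))
    specular = prefiltered_env[mip] * (f0 * A + B)

    diffuse = (1.0 - metallic) * albedo * irradiance
    return diffuse + specular
```

Because both lookups (pre-filtered environment and env-BRDF fit) are differentiable in the material parameters, gradients can flow from pixel colors back to the material and lighting factors, which is what makes joint optimization of the factorization possible.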
Computational Visual Media 2025, 11(4): 727-744
Published: 01 October 2025