Research Article | Open Access

NeuS-PIR: Learning relightable neural surface using pre-integrated rendering

King Abdullah University of Science and Technology (KAUST), Thuwal 23955, Saudi Arabia
The Robotics and Autonomous Driving Lab (RAL), Baidu Research, Beijing 100085, China
Paul G. Allen School of Computer Science, University of Washington, Seattle 98195, WA, USA
Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100190, China

Abstract

In this paper, we propose NeuS-PIR, a novel approach for learning relightable neural surfaces using pre-integrated rendering from multi-view image observations. Unlike traditional methods based on NeRFs or discrete mesh representations, our approach employs an implicit neural surface representation to reconstruct high-quality geometry. This representation enables the factorization of the radiance field into two components: a spatially varying material field and an all-frequency lighting model. By jointly optimizing this factorization with a differentiable pre-integrated rendering framework and material-encoding regularization, our method effectively addresses the ambiguity in geometry reconstruction, leading to improved disentanglement and refinement of scene properties. Furthermore, we introduce a technique to distill indirect illumination fields, capturing complex lighting effects such as inter-reflections. As a result, NeuS-PIR enables advanced applications like relighting, which can be seamlessly integrated into modern graphics engines. Extensive qualitative and quantitative experiments on both synthetic and real datasets demonstrate that NeuS-PIR outperforms existing methods across various tasks. Source code is available at https://github.com/Sheldonmao/NeuSPIR.
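To give a flavor of the pre-integrated rendering idea referenced in the abstract, the sketch below shows its diffuse half in NumPy: instead of integrating the environment light per shading point at render time, the cosine-weighted irradiance is pre-integrated per normal direction and then simply multiplied by albedo. This is a minimal illustrative sketch, not the paper's implementation; the toy environment light, the Fibonacci-sphere sampler, and all function names are assumptions introduced here for demonstration.

```python
import numpy as np

def fibonacci_sphere(n):
    """Quasi-uniform directions on the unit sphere, used here as integration samples."""
    i = np.arange(n) + 0.5
    phi = np.arccos(1.0 - 2.0 * i / n)          # polar angle
    theta = np.pi * (1.0 + 5.0 ** 0.5) * i      # golden-angle azimuth
    return np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=-1)

def env_light(dirs):
    """Toy RGB environment: a bright warm 'sky' toward +z plus a dim constant ambient."""
    sky = np.clip(dirs[:, 2], 0.0, None)[:, None] * np.array([1.0, 0.9, 0.8])
    return 0.2 + 2.0 * sky

def prefilter_irradiance(normal, n_samples=4096):
    """Pre-integrate the light: cosine-weighted average over the hemisphere around `normal`."""
    dirs = fibonacci_sphere(n_samples)
    cos = dirs @ normal
    mask = cos > 0.0                            # keep the upper hemisphere only
    w = cos[mask]
    return (env_light(dirs[mask]) * w[:, None]).sum(axis=0) / w.sum()

def shade_diffuse(albedo, normal):
    """Diffuse shading with a pre-integrated light: albedo * prefiltered irradiance."""
    return albedo * prefilter_irradiance(normal)
```

In a full pre-integrated (split-sum) pipeline the specular term is handled analogously, with the environment prefiltered per roughness level; here only the diffuse lobe is shown to keep the sketch short.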


Computational Visual Media
Pages 727-744


Cite this article:
Mao S, Wu C, Shen Z, et al. NeuS-PIR: Learning relightable neural surface using pre-integrated rendering. Computational Visual Media, 2025, 11(4): 727-744. https://doi.org/10.26599/CVM.2025.9450493


Received: 12 February 2025
Accepted: 06 May 2025
Published: 01 October 2025
© The Author(s) 2025.

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
