Open Access Research Article
NeuS-PIR: Learning relightable neural surface using pre-integrated rendering
Computational Visual Media 2025, 11(4): 727-744
Published: 01 October 2025

In this paper, we propose NeuS-PIR, a novel approach for learning relightable neural surfaces using pre-integrated rendering from multi-view image observations. Unlike traditional methods based on neural radiance fields (NeRFs) or discrete mesh representations, our approach employs an implicit neural surface representation to reconstruct high-quality geometry. This representation enables the factorization of the radiance field into two components: a spatially varying material field and an all-frequency lighting model. By jointly optimizing this factorization with a differentiable pre-integrated rendering framework and material encoding regularization, our method effectively addresses the ambiguity in geometry reconstruction, leading to improved disentanglement and refinement of scene properties. Furthermore, we introduce a technique to distill indirect illumination fields, capturing complex lighting effects such as inter-reflections. As a result, NeuS-PIR enables advanced applications such as relighting, which can be seamlessly integrated into modern graphics engines. Extensive qualitative and quantitative experiments on both synthetic and real datasets demonstrate that NeuS-PIR outperforms existing methods across various tasks. Source code is available at https://github.com/Sheldonmao/NeuSPIR.
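The pre-integrated rendering the abstract refers to is commonly realized as a split-sum style approximation: outgoing radiance is approximated by a roughness-prefiltered environment lookup multiplied by a pre-integrated BRDF term that depends only on the view angle and roughness. The toy numpy sketch below illustrates only that general idea, not the paper's actual implementation; all function names, the 1-D "mip chain", and the BRDF stand-in are illustrative assumptions.

```python
import numpy as np

# Toy sketch of pre-integrated (split-sum) shading. A real renderer
# prefilters an environment cubemap per roughness mip level and looks up a
# 2-D pre-integrated BRDF table; here both are reduced to scalars.

def prefiltered_env(roughness, mip_levels):
    """Pick a pre-convolved environment value from a toy 1-D 'mip chain'
    indexed by roughness (each level holds one constant radiance)."""
    level = int(round(roughness * (len(mip_levels) - 1)))
    return mip_levels[level]

def brdf_lut(n_dot_v, roughness):
    """Stand-in for the pre-integrated BRDF table: returns (scale, bias)
    applied to the Fresnel reflectance at normal incidence, f0."""
    scale = (1.0 - roughness) * n_dot_v
    bias = 0.04 * (1.0 - n_dot_v) ** 5  # Schlick-style grazing-angle bias
    return scale, bias

def split_sum_shade(albedo, f0, roughness, n_dot_v, mip_levels):
    """Combine a diffuse term (roughest mip ~ irradiance) with the
    split-sum specular term: env * (f0 * scale + bias)."""
    env = prefiltered_env(roughness, mip_levels)
    scale, bias = brdf_lut(n_dot_v, roughness)
    specular = env * (f0 * scale + bias)
    diffuse = albedo * mip_levels[-1]
    return diffuse + specular
```

Because both factors (environment and BRDF integral) are precomputed, this shading model is cheap and differentiable, which is what makes it attractive for jointly optimizing material and lighting as described in the abstract.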

Open Access Research Article
Central similarity consistency hashing for asymmetric image retrieval
Computational Visual Media 2024, 10(4): 725-740
Published: 17 August 2024

Asymmetric image retrieval methods have drawn much attention due to their effectiveness in resource-constrained scenarios. They learn two models in an asymmetric paradigm, i.e., a small model for the query side and a large model for the gallery. However, we empirically find that the mutual training scheme (learning with each other) inevitably degrades the performance of the large gallery model, due to the negative effects exerted by the small query model. In this paper, we propose Central Similarity Consistency Hashing (CSCH), which simultaneously learns a small query model and a large gallery model in a mutually promoted manner, ensuring both high retrieval accuracy and efficiency on the query side. To achieve this, we first introduce heuristically generated hash centers as the common learning target for both models. Instead of randomly assigning each hash center to a category, we employ the Hungarian algorithm to match them optimally, aligning the Hamming similarity of hash centers with the semantic similarity of their classes. Furthermore, we introduce an instance-level consistency loss, which enables explicit knowledge transfer from the gallery model to the query model without sacrificing gallery performance. Guided by the unified learning of hash centers and the knowledge distilled from the gallery model, the query model can be gradually aligned to the Hamming space of the gallery model in a decoupled manner. Extensive experiments demonstrate the superiority of our CSCH method compared with current state-of-the-art deep hashing methods. The open-source code is available at https://github.com/dubanx/CSCH.
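The center-to-class matching described above can be sketched as an assignment problem: choose a permutation of hash centers so that pairs of semantically similar classes receive centers with high mutual Hamming similarity. The abstract says the authors use the Hungarian algorithm; in the toy sketch below the optimal permutation is instead found by brute force over all permutations (feasible only for a handful of classes), and all centers and similarity values are made-up illustrative data.

```python
import itertools
import numpy as np

# Toy sketch: assign hash centers to classes so that Hamming similarity of
# centers mirrors semantic similarity of classes. Brute-force stands in for
# the Hungarian algorithm used in the paper; data are illustrative.

def hamming_sim(a, b):
    """Fraction of agreeing bits between two binary codes in {0,1}^d."""
    return np.mean(a == b)

def best_assignment(centers, class_sim):
    """Return a permutation p (class c -> center p[c]) maximizing the sum
    over class pairs of class_sim[i, j] * hamming_sim of their centers."""
    n = len(centers)
    best_perm, best_score = None, -np.inf
    for perm in itertools.permutations(range(n)):
        score = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                score += class_sim[i, j] * hamming_sim(
                    centers[perm[i]], centers[perm[j]])
        if score > best_score:
            best_perm, best_score = perm, score
    return best_perm
```

With the pair scores laid out as a cost structure, the same objective can be handed to a proper assignment solver (e.g. `scipy.optimize.linear_sum_assignment`) for realistic numbers of classes; the brute-force version is only for clarity.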
