Neural rendering provides a fundamentally new way to render photorealistic images. Similar to traditional light-baking methods, neural rendering uses neural networks to bake representations of scenes, materials, and lights into latent vectors learned from path-traced ground truths. However, existing neural rendering algorithms typically rely on G-buffers to supply position, normal, and texture information about the scene. These buffers are prone to occlusion by transparent surfaces, leading to distortion and loss of detail in the rendered images. To address this limitation, we propose a novel neural rendering pipeline that accurately renders the scene behind transparent surfaces under global illumination and supports variable scenes. Our method separates the G-buffers of opaque and transparent objects, retaining the G-buffer information behind transparent objects. In addition, to render transparent objects with permutation invariance, we design a new permutation-invariant neural blending function. We integrate our algorithm into an efficient custom renderer and achieve real-time performance. Our results show that the method renders photorealistic images for variable scenes and viewpoints, accurately capturing complex transparent structures together with global illumination. The renderer achieves real-time performance (256 × 256 at 63 frames/s and 512 × 512 at 32 frames/s) for scenes containing multiple variable transparent objects.
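The abstract's key ingredient is a blending function whose output does not depend on the order in which transparent layers are presented. A minimal sketch of that property, assuming (hypothetically) that each transparent layer is first mapped through a shared per-layer encoder and the encodings are then combined by a symmetric reduction such as a mean — the paper's actual learned blending function is not specified here:

```python
import numpy as np

def encode(feature):
    # Stand-in for a shared, learned per-layer encoder (hypothetical;
    # the paper learns this from path-traced ground truths).
    return np.tanh(feature)

def blend(layer_features):
    # Permutation invariance comes from the symmetric reduction:
    # a mean over per-layer encodings is unchanged by any reordering
    # of the transparent layers.
    encoded = np.stack([encode(f) for f in layer_features])
    return encoded.mean(axis=0)

rng = np.random.default_rng(0)
layers = [rng.standard_normal(8) for _ in range(3)]
out_forward = blend(layers)
out_reversed = blend(layers[::-1])  # same result regardless of layer order
```

Any symmetric reduction (sum, mean, max) preserves this property; which one a learned blending network uses is a design choice of the method itself.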
Open Access Research Article
Computational Visual Media 2026, 12(2): 321-335
Published: 20 March 2026