Gradient-domain rendering methods can render higher-quality images than traditional ray tracing at the same time cost, and, when combined with neural networks, achieve better reconstruction quality than conventional screened Poisson reconstruction. However, it remains challenging for these methods to preserve detail, especially in areas with complex indirect illumination and shadows. We propose an unsupervised reconstruction method that separates direct illumination from indirect illumination and feeds each, together with corresponding auxiliary channels, into our unsupervised network as two separate tasks. In addition, we introduce attention modules into our network, which further improve detail. Finally, we combine the results of the direct and indirect illumination tasks to form the rendering result. Experiments show that our method significantly improves image detail, especially in scenes with complex conditions.
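As background, the conventional screened Poisson reconstruction mentioned above combines a noisy primal image $I$ with estimated image-space gradients $G = (G_x, G_y)$ by solving, in its standard form,

$$\hat{I} = \arg\min_{I'} \; \alpha^2 \lVert I' - I \rVert_2^2 + \lVert \nabla I' - G \rVert_2^2,$$

where the screening weight $\alpha$ balances fidelity to the sampled primal image against fidelity to the sampled gradients; learning-based reconstructions aim to improve on this fixed quadratic objective.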

Global illumination (GI) plays a crucial role in rendering realistic results for virtual exhibitions, such as virtual car exhibitions. These scenarios usually include all-frequency bidirectional reflectance distribution functions (BRDFs), although their geometries and light configurations may be static. Rendering all-frequency BRDFs in real time remains challenging due to the complex light transport involved. Existing approaches, including precomputed radiance transfer, light probes, and recent path-tracing-based approaches such as ReSTIR PT, cannot satisfy quality and performance requirements simultaneously. Herein, we propose a practical hybrid global illumination approach that combines ray tracing with cached GI, caching the incoming radiance using wavelets. Our approach produces results close to those of offline renderers at a runtime cost of only about 17 ms and is robust across all-frequency BRDFs. It is designed for applications involving static lighting and geometry, such as virtual exhibitions.
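As a sketch of how a wavelet radiance cache of this kind is typically evaluated (not necessarily the exact formulation used here), the cached incoming radiance at a shading point $x$ is expanded in an orthonormal wavelet basis $\{\Psi_j\}$,

$$L_i(x,\omega) \approx \sum_j c_j(x)\,\Psi_j(\omega),$$

so that shading with a BRDF whose cosine-weighted slice has wavelet coefficients $b_j(\omega_o)$ reduces to a dot product of coefficient vectors, $L_o(x,\omega_o) \approx \sum_j c_j(x)\,b_j(\omega_o)$, which helps make all-frequency BRDFs affordable at runtime.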
Glinty details arising from complex microstructures significantly enhance rendering realism. However, previous methods use high-resolution normal maps to define each micro-geometry, which incurs a huge memory overhead. This paper observes that many self-similar materials are composed of independent structural elements, which we define as tiny example microstructures. We propose a procedural model that represents microstructures implicitly by applying spatial transformations and spatial distributions to tiny examples. Furthermore, we precompute normal distribution functions (NDFs) for the tiny examples using 4D Gaussians and store them in multi-scale NDF maps. Combined with a tiny-example-based NDF evaluation method, complex glinty surfaces can be rendered simply by texture sampling. Experimental results show that our tiny-example-based microstructure rendering method is GPU-friendly, reproducing the high-frequency reflection features of different microstructures in real time with low memory and computational overhead.
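For intuition, precomputing an NDF with 4D Gaussians typically builds on the position-normal mixture construction (not necessarily the exact one used in this paper): the microstructure's joint position-normal distribution is approximated as

$$\mathcal{N}(\mathbf{u},\mathbf{s}) \approx \sum_i c_i\, G_i(\mathbf{u},\mathbf{s}),$$

a sum of 4D Gaussian elements over position $\mathbf{u}$ and projected normal $\mathbf{s}$, so that the footprint NDF $D_{\mathcal{P}}(\mathbf{s}) = \int \mathcal{G}_{\mathcal{P}}(\mathbf{u})\,\mathcal{N}(\mathbf{u},\mathbf{s})\,d\mathbf{u}$ has a closed form that can be tabulated into multi-scale NDF maps.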

The interaction between light and materials is key to physically-based realistic rendering. However, it is also complex to analyze, especially when the materials contain a large amount of detail and thus exhibit "glinty" visual effects. Recent methods for producing glinty appearance are expected to be important in next-generation computer graphics. We provide a comprehensive survey of recent glinty appearance rendering. We start with a definition of glinty appearance based on microfacet theory, and then summarize research in terms of representation and practical rendering. We have implemented typical methods on our unified platform and compare them in terms of visual effects, rendering speed, and memory consumption. Finally, we briefly discuss limitations and future research directions. We hope our analysis, implementations, and comparisons will provide insight for readers choosing suitable methods for their applications or carrying out related research.
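For reference, the microfacet specular BRDF underlying this definition is

$$f_r(\omega_i,\omega_o) = \frac{F(\omega_i,\mathbf{h})\, G(\omega_i,\omega_o)\, D(\mathbf{h})}{4\,(\mathbf{n}\cdot\omega_i)(\mathbf{n}\cdot\omega_o)},$$

where $\mathbf{h}$ is the half vector, $F$ the Fresnel term, $G$ the shadowing-masking term, and $D$ the normal distribution function; glinty appearance arises when the smooth $D$ is replaced by a spatially varying, footprint-dependent NDF evaluated per pixel.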
Stochastic progressive photon mapping (SPPM) is an important global illumination method in computer graphics. It can efficiently simulate caustics and specular-diffuse-specular lighting effects. However, as a biased method, with a limited number of iterations it suffers from both bias and variance, which introduce multi-scale noise into SPPM renderings. Recent learning-based methods have shown great advantages in denoising unbiased Monte Carlo (MC) renderings, but have not been applied to biased ones. In this paper, we present the first learning-based method specifically designed for denoising biased SPPM renderings. First, to avoid conflicting denoising constraints, the radiance of the final image is decomposed into two components: caustic and global. These two components are then denoised separately via a two-network framework. In each network, we employ a novel multi-residual block with two filter sizes, which significantly improves the model's capability and makes it better suited to multi-scale noise in both low-frequency and high-frequency areas. We also present a series of photon-related auxiliary features to better handle noise while preserving illumination details, especially caustics. Compared with other state-of-the-art learning-based denoising methods applied to this problem, our method achieves higher denoising quality, efficiently removing multi-scale noise while keeping illumination sharp.
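Concretely, the decomposition follows the usual photon-map split (the notation here is illustrative, and the authors' exact path classification may differ):

$$L \;=\; L_{\mathrm{caustic}} + L_{\mathrm{global}},$$

where, in standard photon-mapping terminology, the caustic component is estimated from photons that reach a non-specular surface through a purely specular chain (LS$^+$D paths) and the global component from the remaining photons; each component is denoised by its own network before the two are recombined.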

Monte Carlo based methods such as path tracing are widely used in movie production. To achieve low noise, they require many samples per pixel, resulting in long rendering times. To reduce the cost, one solution is Monte Carlo denoising, which renders the image with fewer samples per pixel (as few as 128) and then denoises the resulting image. Many Monte Carlo denoising methods rely on deep learning: they use convolutional neural networks to learn the relationship between noisy images and reference images, taking auxiliary features such as position and normal together with image color as inputs. The network predicts kernels which are then applied to the noisy input. These methods show powerful denoising ability, but tend to lose geometric or lighting details and to blur sharp features during denoising.
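To make the kernel-prediction step concrete, the following minimal sketch (plain NumPy, hypothetical array shapes, not any particular paper's code) applies network-predicted per-pixel kernels to a noisy image:

```python
import numpy as np

def apply_predicted_kernels(noisy, kernels):
    """Apply per-pixel filter kernels predicted by a denoising network.

    noisy   : (H, W, 3) noisy radiance image.
    kernels : (H, W, k, k) per-pixel weights (e.g. a softmax over k*k taps),
              assumed to be normalized so each pixel's weights sum to 1.
    Returns the (H, W, 3) filtered image.
    """
    H, W, _ = noisy.shape
    k = kernels.shape[-1]
    r = k // 2
    padded = np.pad(noisy, ((r, r), (r, r), (0, 0)), mode="edge")
    out = np.zeros_like(noisy)
    for dy in range(k):
        for dx in range(k):
            # Each tap weights a shifted copy of the noisy neighborhood.
            shifted = padded[dy:dy + H, dx:dx + W, :]
            out += shifted * kernels[:, :, dy, dx, None]
    return out
```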

Temporal coherence is one of the central challenges in rendering stylized lines. It is especially difficult for stylized contours of coarse meshes or non-uniformly sampled models, because those contours are polygonal feature edges on the models, with no continuous correspondence between frames. We describe a simple, novel technique for constructing a 2D brush path along a 3D contour. We also introduce a 3D parameter propagation and re-parameterization procedure that constructs stroke paths along the 2D brush path, allowing coherently stylized feature lines to be drawn in a wide range of styles. Our method runs in real time for coarse or non-uniformly sampled models, making it suitable for interactive applications that require temporal coherence.
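As a small illustration of the re-parameterization idea (a sketch only; the function below and its assumptions are not taken from the paper, and the per-frame propagation that provides temporal coherence is not shown), a 2D brush path can be given a normalized arc-length parameter for mapping a stroke style along it:

```python
import numpy as np

def arc_length_parameters(points):
    """Normalized arc-length parameter t in [0, 1] for each vertex of a
    2D polyline brush path (points: (N, 2) array of screen-space vertices).

    The parameter can be used to map a stroke texture or style along the
    path; propagating it coherently between frames is left out here.
    """
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)  # segment lengths
    t = np.concatenate(([0.0], np.cumsum(seg)))            # cumulative length
    total = t[-1]
    return t / total if total > 0 else t
```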