Regular Paper Issue
Unsupervised Reconstruction for Gradient-Domain Rendering with Illumination Separation
Journal of Computer Science and Technology 2024, 39(6): 1281-1291
Published: 16 January 2025
Abstract

Gradient-domain rendering methods can render higher-quality images at the same time cost as traditional ray tracing, and, combined with neural networks, achieve better rendering quality than conventional screened Poisson reconstruction. However, it is still challenging for these methods to preserve detail, especially in areas with complex indirect illumination and shadows. We propose an unsupervised reconstruction method that separates the direct illumination from the indirect and feeds them, together with corresponding auxiliary channels, into our unsupervised network as two separate tasks. In addition, we introduce attention modules into our network, which further improve details. We finally combine the results of the direct and indirect illumination tasks to form the rendering result. Experiments show that our method significantly improves image detail quality, especially in scenes with complex conditions.
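To make the separation idea concrete, the following is a minimal sketch of the two-task reconstruction structure, assuming hypothetical tensor names (`direct`, `indirect`, gradient buffers, auxiliary channels) and a placeholder `ReconstructionNet`; it illustrates the overall data flow only and is not the authors' network.

```python
# Minimal sketch of two-task reconstruction with illumination separation
# (not the authors' code). Inputs are assumed to be PyTorch tensors of
# shape (N, C, H, W): the direct/indirect primal images, their gradient
# images, and shared auxiliary channels (e.g., albedo, normal, depth).
import torch
import torch.nn as nn

class ReconstructionNet(nn.Module):
    """Placeholder reconstruction network; a real one would stack
    convolutional blocks with attention modules."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def reconstruct(direct, indirect, grads_direct, grads_indirect, aux):
    # Each illumination component is reconstructed as a separate task,
    # conditioned on its own gradients plus the shared auxiliary channels.
    net_d = ReconstructionNet(direct.shape[1] + grads_direct.shape[1] + aux.shape[1])
    net_i = ReconstructionNet(indirect.shape[1] + grads_indirect.shape[1] + aux.shape[1])
    out_d = net_d(torch.cat([direct, grads_direct, aux], dim=1))
    out_i = net_i(torch.cat([indirect, grads_indirect, aux], dim=1))
    # The final rendering combines the two reconstructed components.
    return out_d + out_i
```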

Regular Paper Issue
A Tiny Example Based Procedural Model for Real-Time Glinty Appearance Rendering
Journal of Computer Science and Technology 2024, 39(4): 771-784
Published: 20 September 2024
Abstract

The glinty details arising from complex microstructures significantly enhance rendering realism. However, previous methods use high-resolution normal maps to define each micro-geometry, which incurs huge memory overhead. This paper observes that many self-similar materials have independent structural characteristics, which we define as tiny-example microstructures. We propose a procedural model that represents microstructures implicitly by applying spatial transformations and spatial distributions to tiny examples. Furthermore, we precompute normal distribution functions (NDFs) of tiny examples with 4D Gaussians and store them in multi-scale NDF maps. Combined with a tiny-example-based NDF evaluation method, complex glinty surfaces can be rendered simply by texture sampling. Experimental results show that our tiny-example-based microstructure rendering method is GPU-friendly, successfully reproducing high-frequency reflection features of different microstructures in real time with low memory and computational overhead.
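The following is an illustrative sketch of NDF evaluation by texture sampling over a tiny example. The layout of the multi-scale NDF map, the tiling transform, and all names are assumptions for illustration, not the paper's data structures.

```python
# Illustrative sketch (not the paper's implementation). Assumes a
# precomputed multi-scale NDF map list `ndf_maps`, where ndf_maps[level]
# is a 4D array indexed by (u, v) position inside the tiny example and
# (s, t), the projected half-vector.
import numpy as np

def tile_to_example_uv(uv, tile_scale, rotation):
    """Map a surface uv into the tiny example's unit square via a
    procedural spatial transformation (scale + rotation + wrap)."""
    c, s = np.cos(rotation), np.sin(rotation)
    x, y = np.asarray(uv) * tile_scale
    return np.array([c * x - s * y, s * x + c * y]) % 1.0  # wrap into [0, 1)^2

def eval_ndf(ndf_maps, uv, half_vec, footprint, tile_scale=8.0, rotation=0.3):
    # Pick a mip level from the pixel footprint so one lookup covers
    # roughly the right extent of microstructure.
    level = min(int(np.log2(max(footprint * tile_scale, 1.0))), len(ndf_maps) - 1)
    u, v = tile_to_example_uv(uv, tile_scale, rotation)
    # Project the half-vector onto the tangent plane and remap to [0, 1].
    s, t = (half_vec[0] + 1.0) * 0.5, (half_vec[1] + 1.0) * 0.5
    ndf = ndf_maps[level]
    iu, iv = int(u * (ndf.shape[0] - 1)), int(v * (ndf.shape[1] - 1))
    js, jt = int(s * (ndf.shape[2] - 1)), int(t * (ndf.shape[3] - 1))
    return ndf[iu, iv, js, jt]
```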

Open Access Review Article Issue
Recent advances in glinty appearance rendering
Computational Visual Media 2022, 8(4): 535-552
Published: 16 May 2022
Abstract

The interaction between light and materials is key to physically-based realistic rendering. However, it is also complex to analyze, especially when the materials contain a large number of details and thus exhibit "glinty" visual effects. Recent methods for producing glinty appearance are expected to be important in next-generation computer graphics. We provide here a comprehensive survey of recent glinty appearance rendering. We start with a definition of glinty appearance based on microfacet theory, and then summarize research work in terms of representation and practical rendering. We have implemented typical methods on a unified platform and compare them in terms of visual effects, rendering speed, and memory consumption. Finally, we briefly discuss limitations and future research directions. We hope our analysis, implementations, and comparisons will provide insight for readers choosing suitable methods for their applications or carrying out related research.
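For context, the microfacet definition that such work builds on is the standard Cook-Torrance BRDF; glinty appearance arises when the smooth normal distribution function D is replaced by a high-frequency, spatially varying one evaluated over the pixel footprint. The notation below is the conventional one, not taken verbatim from the survey:

```latex
% Standard microfacet BRDF (Cook-Torrance form).
% D: normal distribution function (NDF), F: Fresnel term,
% G: shadowing-masking term, n: surface normal.
f_r(\omega_i, \omega_o) =
  \frac{D(\omega_h)\, F(\omega_i, \omega_h)\, G(\omega_i, \omega_o)}
       {4\, (\mathbf{n} \cdot \omega_i)\, (\mathbf{n} \cdot \omega_o)},
\qquad
\omega_h = \frac{\omega_i + \omega_o}{\lVert \omega_i + \omega_o \rVert}
```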

Regular Paper Issue
Denoising Stochastic Progressive Photon Mapping Renderings Using a Multi-Residual Network
Journal of Computer Science and Technology 2020, 35(3): 506-521
Published: 29 May 2020
Abstract

Stochastic progressive photon mapping (SPPM) is one of the important global illumination methods in computer graphics, able to simulate caustics and specular-diffuse-specular lighting effects efficiently. However, as a biased method, it suffers from both bias and variance under limited iterations, and together they introduce multi-scale noise into SPPM renderings. Recent learning-based methods have shown great advantages in denoising unbiased Monte Carlo (MC) renderings, but have not been leveraged for biased ones. In this paper, we present the first learning-based method specifically designed for denoising biased SPPM renderings. First, to avoid conflicting denoising constraints, the radiance of the final image is decomposed into two components, caustic and global, which are then denoised separately via a two-network framework. In each network, we employ a novel multi-residual block with two filter sizes, which significantly improves the model's capability and makes it better suited to multi-scale noise in both low-frequency and high-frequency areas. We also present a series of photon-related auxiliary features to better handle noise while preserving illumination details, especially caustics. Compared with other state-of-the-art learning-based denoising methods applied to this problem, our method achieves higher denoising quality, efficiently removing multi-scale noise while keeping illumination sharp.
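The following is a sketch of a multi-residual block with two filter sizes, in the spirit of the design described above; the layer widths, fusion layer, and naming are hypothetical and are not the authors' architecture.

```python
# Sketch of a multi-residual block with parallel 3x3 and 5x5 convolution
# paths, intended to capture both high-frequency and low-frequency noise
# scales (hypothetical layer sizes; not the authors' code).
import torch
import torch.nn as nn

class MultiResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.branch3 = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.branch5 = nn.Sequential(
            nn.Conv2d(channels, channels, 5, padding=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 5, padding=2),
        )
        # 1x1 convolution fuses the two filter-size branches.
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        y = self.fuse(torch.cat([self.branch3(x), self.branch5(x)], dim=1))
        return torch.relu(x + y)  # residual connection

# In a two-network framework, the caustic and global radiance components
# would each be denoised by a separate stack of such blocks, then summed.
```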
