Open Access Research Article
An attention-embedded GAN for SVBRDF recovery from a single image
Computational Visual Media 2023, 9 (3): 551-561
Published: 22 March 2023
Downloads: 54

Learning-based approaches have made substantial progress in capturing spatially-varying bidirectional reflectance distribution functions (SVBRDFs) from a single image with unknown lighting and geometry. However, most existing networks consider only per-pixel losses, which limits their ability to recover local features such as smooth glossy regions. A few generative adversarial networks use multiple discriminators for different parameter maps, increasing network complexity. We present a novel end-to-end generative adversarial network (GAN) that recovers appearance from a single photograph of a nearly-flat surface lit by flash. We use a single unified adversarial framework for all parameter maps, and an attention module guides the network to focus on details of the maps. Furthermore, an SVBRDF map loss is combined with the adversarial loss to prevent the network from paying excess attention to specular highlights. We demonstrate and evaluate our method on both public datasets and real data. Quantitative analysis and visual comparisons indicate that our method achieves better results than the state of the art in most cases.
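
For readers who want a concrete picture, the following is a minimal PyTorch sketch of the two ideas the abstract highlights: a spatial attention block inside a single encoder-decoder generator that predicts all SVBRDF maps at once, and a generator objective that combines one unified adversarial term with a per-map L1 loss. The layer sizes, channel counts, and loss weight are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch (assumed layer sizes and loss weights, not the paper's released code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    """Predicts a per-pixel weight map so the generator focuses on local map details."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels // 8, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 8, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.attn(x)  # broadcast the single-channel weights over all features

class SVBRDFGenerator(nn.Module):
    """Maps one flash-lit photo to 10 SVBRDF channels (3 diffuse + 3 normal + 1 roughness + 3 specular)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
        )
        self.attention = SpatialAttention(128)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 10, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, photo):
        return self.decoder(self.attention(self.encoder(photo)))

def generator_loss(disc, fake_maps, real_maps, lambda_map=100.0):
    """One unified adversarial term on the stacked maps plus a per-map L1 term;
    the L1 term keeps the network from over-attending to specular highlights."""
    logits = disc(fake_maps)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return adv + lambda_map * F.l1_loss(fake_maps, real_maps)

# Example: maps = SVBRDFGenerator()(torch.randn(1, 3, 256, 256))  # -> (1, 10, 256, 256)
```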

Open Access Research Article
AOGAN: A generative adversarial network for screen space ambient occlusion
Computational Visual Media 2022, 8 (3): 483-494
Published: 06 January 2022
Downloads: 41

Ambient occlusion (AO) is a widely used real-time rendering technique that estimates light intensity on visible scene surfaces. Recently, a number of learning-based AO approaches have been proposed, bringing a new angle to screen-space shading by solving it in a unified learning framework with competitive quality and speed. However, most such methods produce high error on complex scenes or tend to ignore details. We propose an end-to-end generative adversarial network for producing realistic AO, and explore how much the perceptual loss in the generative model contributes to AO accuracy. We also describe an attention mechanism that improves the accuracy of details; its effectiveness is demonstrated on a wide variety of scenes.
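
As a rough illustration only, the PyTorch sketch below shows the two ingredients this abstract emphasizes: a channel-attention block for sharpening fine details and a VGG-based perceptual loss on the predicted AO map. The VGG cut-off, layer sizes, and loss weights are assumptions and do not reflect the published AOGAN network.

```python
# Illustrative sketch only (assumed layers and weights, not the published AOGAN code).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention that re-weights feature channels."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))           # global average pool -> per-channel weights
        return x * w.unsqueeze(-1).unsqueeze(-1)  # rescale each feature channel

class PerceptualLoss(nn.Module):
    """Compares frozen VGG-16 features of the predicted and reference AO maps."""
    def __init__(self):
        super().__init__()
        self.vgg = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)

    def forward(self, pred_ao, target_ao):
        # AO maps are single-channel; replicating to 3 channels for VGG is an assumption here.
        pred3, target3 = pred_ao.repeat(1, 3, 1, 1), target_ao.repeat(1, 3, 1, 1)
        return F.l1_loss(self.vgg(pred3), self.vgg(target3))

def ao_generator_loss(disc_logits, pred_ao, target_ao, perceptual,
                      lambda_pix=10.0, lambda_perc=1.0):
    """Adversarial term + pixel L1 + perceptual term on the generated AO map."""
    adv = F.binary_cross_entropy_with_logits(disc_logits, torch.ones_like(disc_logits))
    return (adv + lambda_pix * F.l1_loss(pred_ao, target_ao)
            + lambda_perc * perceptual(pred_ao, target_ao))
```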
