Research Article | Open Access

Delving into high-quality SVBRDF acquisition: A new setup and method

School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China
Guangdong Shidi Intelligence Technology, Ltd., Guangzhou 510000, China

Abstract

In this study, we present a new framework for acquiring high-quality SVBRDF maps that addresses the limitations of current methods. The core of our method is a simple hardware setup, consisting of a consumer-level camera and LED lights, combined with a carefully designed network that accurately recovers the high-quality SVBRDF properties of a nearly planar object. From a flexible number of captured images of an object, our network trains a separate subnetwork for each property map, with a loss function suited to that map. To further enhance the quality of the maps, we improved the network structure by adding a novel skip connection that links the encoder and decoder with global features. Extensive experiments on both synthetic and real-world materials demonstrate that our method outperforms previous methods and produces superior results. Furthermore, the proposed setup can also be used to acquire physically based rendering maps of special materials.
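The abstract mentions a skip connection that links the encoder and decoder with global features; the paper's exact architecture is not given here, so the following is only a minimal numpy sketch, under assumed tensor shapes, of the general idea: a global descriptor is pooled from the encoder features and broadcast back onto the spatial grid before being fused with the decoder features. The function name `global_skip` and all shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def global_skip(encoder_feat, decoder_feat):
    """Hypothetical sketch of a skip connection that augments the usual
    encoder->decoder shortcut with a global feature vector.
    encoder_feat, decoder_feat: (C, H, W) arrays at the same resolution."""
    # Global branch: average-pool the encoder features over all spatial
    # positions to obtain a single C-dimensional descriptor.
    global_vec = encoder_feat.mean(axis=(1, 2))                # (C,)
    # Broadcast the global descriptor back to the decoder's spatial grid.
    _, h, w = decoder_feat.shape
    global_map = np.broadcast_to(global_vec[:, None, None],
                                 (global_vec.shape[0], h, w))  # (C, H, W)
    # Concatenate local encoder features, global context, and decoder
    # features along the channel axis; in a real network a convolution
    # would then fuse the concatenated channels.
    return np.concatenate([encoder_feat, global_map, decoder_feat], axis=0)

enc = np.random.rand(8, 16, 16)
dec = np.random.rand(8, 16, 16)
fused = global_skip(enc, dec)
print(fused.shape)  # (24, 16, 16)
```

In an actual trainable network the pooling and fusion would be learned layers, but the data flow — pool, broadcast, concatenate — is the part the skip connection itself contributes.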

Computational Visual Media
Pages 523-541
Cite this article:
Xian C, Li J, Wu H, et al. Delving into high-quality SVBRDF acquisition: A new setup and method. Computational Visual Media, 2024, 10(3): 523-541. https://doi.org/10.1007/s41095-023-0352-6


Received: 02 March 2023
Accepted: 20 April 2023
Published: 09 February 2024
© The Author(s) 2024.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
