
Real-time per-pixel focusing method for light field rendering

T. Chlubna, T. Milet, P. Zemčík
Faculty of Information Technology, Brno University of Technology, 612 00 Brno, Czech Republic

Abstract

Light field rendering is an image-based rendering method that uses only images of a scene, rather than 3D models, as input to render new views. A light field approximation, represented as a set of images, suffers from so-called refocusing artifacts because pixels in the scene lie at different depths. Without depth information, correct focusing of a light field scene is limited to a single focusing distance. This work addresses correct focusing and proposes a real-time method for focusing light field scenes, based on statistical analysis of the pixel values that contribute to the final image. Unlike existing techniques, the method requires no precomputed or acquired depth information. Memory requirements and streaming bandwidth are reduced, and real-time rendering is possible even for high-resolution light field data, with visually satisfactory results. An experimental evaluation of the proposed method, implemented on a GPU, is presented.

Keywords: light field, image-based rendering, plenoptic function, computational photography
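The abstract describes the method only at a high level. As a rough illustration of one way such per-pixel focusing can work, the sketch below tests a set of candidate focusing distances for every output pixel and keeps the one for which the colors contributed by the individual light field views are most consistent (lowest variance). The function name, the camera-grid reprojection (a simple wrap-around shift-and-sum), and the fixed set of candidate disparities are all illustrative assumptions, not the authors' actual GPU implementation.

import numpy as np

def per_pixel_refocus(views, grid, disparities):
    """Hypothetical per-pixel refocusing sketch (not the paper's code).

    views:       (N, H, W, 3) float array, one image per light field camera
    grid:        (N, 2) array of camera offsets on the capture plane
    disparities: candidate focusing distances, expressed as pixel disparity
                 per unit of camera offset
    """
    n, h, w, _ = views.shape
    best_var = np.full((h, w), np.inf)
    out = np.zeros((h, w, 3))

    for d in disparities:
        # Shift every view so that scene points at this focusing distance
        # align across views (integer-pixel, wrap-around approximation).
        stack = np.empty_like(views)
        for i, (dx, dy) in enumerate(grid):
            sx, sy = int(round(d * dx)), int(round(d * dy))
            stack[i] = np.roll(views[i], shift=(sy, sx), axis=(0, 1))

        mean = stack.mean(axis=0)                             # candidate color
        var = ((stack - mean) ** 2).sum(axis=3).mean(axis=0)  # color spread

        # Keep, per pixel, the focusing distance whose contributions
        # agree best across the views.
        better = var < best_var
        best_var[better] = var[better]
        out[better] = mean[better]

    return out

For example, per_pixel_refocus(views, grid, np.linspace(0.0, 4.0, 32)) would test 32 candidate disparities. A streaming GPU version could instead accumulate the mean and variance with an online (Welford-style) update, avoiding storage of the whole shifted stack and reducing memory traffic, which matches the bandwidth savings the abstract claims.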


Publication history

Received: 10 November 2020
Accepted: 12 January 2021
Published: 27 February 2021
Issue date: September 2021

Copyright

© The Author(s) 2021

Acknowledgements

This work was supported by the Ministry of Education, Youth and Sports from the National Programme of Sustainability (NPU II), project "IT4Innovations excellence in science", LQ1602.

Rights and permissions

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
