References
[1]
Alain, M.; Aenchbacher, W.; Smolic, A. Interactive light field tilt-shift refocus with generalized shift-and-sum. arXiv preprint arXiv:1910.04699, 2019.
[2]
Landy, M.; Movshon, J. A. The plenoptic function and the elements of early vision. In: Computational Models of Visual Processing. MIT Press, 3–20, 1991.
[3]
Levoy, M.; Hanrahan, P. Light field rendering. In: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, 31–42, 1996.
[4]
Gortler, S. J.; Grzeszczuk, R.; Szeliski, R.; Cohen, M. F. The lumigraph. In: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, 43–54, 1996.
[5]
Isaksen, A.; McMillan, L.; Gortler, S. J. Dynamically reparameterized light fields. In: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, 297–306, 2000.
[6]
Lin, Z. C.; Shum, H. Y. A geometric analysis of light field rendering. International Journal of Computer Vision Vol. 58, No. 2, 121–138, 2004.
[7]
Buehler, C.; Bosse, M.; McMillan, L.; Gortler, S.; Cohen, M. Unstructured lumigraph rendering. In: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, 425–432, 2001.
[8]
Debevec, P.; Yu, Y. Z.; Borshukov, G. Efficient view-dependent image-based rendering with projective texture-mapping. In: Rendering Techniques ’98. Eurographics. Drettakis, G.; Max, N. Eds. Springer Vienna, 105–116, 1998.
[9]
Brox, T.; Bruhn, A.; Papenberg, N.; Weickert, J. High accuracy optical flow estimation based on a theory for warping. In: Computer Vision - ECCV 2004. Lecture Notes in Computer Science, Vol. 3024. Pajdla, T.; Matas, J. Eds. Springer Berlin Heidelberg, 25–36, 2004.
[10]
Hirschmüller, H. Stereo processing by semiglobal matching and mutual information. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 30, No. 2, 328–341, 2008.
[11]
Anisimov, Y.; Wasenmüller, O.; Stricker, D. Rapid light field depth estimation with semi-global matching. In: Proceedings of the IEEE 15th International Conference on Intelligent Computer Communication and Processing, 109–116, 2019.
[12]
Sun, D. Q.; Roth, S.; Black, M. J. Secrets of optical flow estimation and their principles. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2432–2439, 2010.
[13]
Jiang, X.; Pendu, M. L.; Guillemot, C. Depth estimation with occlusion handling from a sparse set of light field views. In: Proceedings of the 25th IEEE International Conference on Image Processing, 634–638, 2018.
[14]
Kolmogorov, V.; Zabih, R. Multi-camera scene reconstruction via graph cuts. In: Computer Vision — ECCV 2002. Lecture Notes in Computer Science, Vol. 2352. Heyden, A.; Sparr, G.; Nielsen, M.; Johansen, P. Eds. Springer Berlin Heidelberg, 82–96, 2002.
[15]
Chen, Y.; Alain, M.; Smolic, A. Fast and accurate optical flow based depth map estimation from light fields. arXiv preprint arXiv:2008.04673v1, 2020.
[16]
Lin, H. T.; Chen, C.; Kang, S. B.; Yu, J. Y. Depth recovery from light field using focal stack symmetry. In: Proceedings of the IEEE International Conference on Computer Vision, 3451–3459, 2015.
[17]
Tao, M. W.; Hadap, S.; Malik, J.; Ramamoorthi, R. Depth from combining defocus and correspondence using light-field cameras. In: Proceedings of the IEEE International Conference on Computer Vision, 673–680, 2013.
[18]
Neri, A.; Carli, M.; Battisti, F. A multi-resolution approach to depth field estimation in dense image arrays. In: Proceedings of the IEEE International Conference on Image Processing, 3358–3362, 2015.
[19]
Sabater, N.; Boisson, G.; Vandame, B.; Kerbiriou, P.; Babon, F.; Hog, M.; Gendrot, R.; Langlois, T.; Bureller, O.; Schubert, A. Dataset and pipeline for multi-view light-field video. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 1743–1753, 2017.
[20]
Jeon, H. G.; Park, J.; Choe, G.; Park, J.; Bok, Y.; Tai, Y. W.; Kweon, I. S. Accurate depth map estimation from a lenslet light field camera. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1547–1555, 2015.
[21]
Shi, L. X.; Hassanieh, H.; Davis, A.; Katabi, D.; Durand, F. Light field reconstruction using sparsity in the continuous Fourier domain. ACM Transactions on Graphics Vol. 34, No. 1, Article No. 12, 2014.
[22]
Vagharshakyan, S.; Bregovic, R.; Gotchev, A. Light field reconstruction using shearlet transform. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 40, No. 1, 133–147, 2018.
[23]
Schedl, D. C.; Bimber, O. Optimized sampling for view interpolation in light fields with overlapping patches. Computer Vision and Image Understanding Vol. 168, 93–103, 2018.
[24]
Eigen, D.; Puhrsch, C.; Fergus, R. Depth map prediction from a single image using a multi-scale deep network. In: Proceedings of the 27th International Conference on Neural Information Processing Systems, Vol. 2, 2366–2374, 2014.
[25]
Peng, J.; Xiong, Z.; Liu, D.; Chen, X. Unsupervised depth estimation from light field using a convolutional neural network. In: Proceedings of the International Conference on 3D Vision, 295–303, 2018.
[26]
Eslami, S. M. A.; Jimenez Rezende, D.; Besse, F.; Viola, F.; Morcos, A. S.; Garnelo, M.; Ruderman, A.; Rusu, A. A.; Danihelka, I.; Gregor, K. et al. Neural scene representation and rendering. Science Vol. 360, No. 6394, 1204–1210, 2018.
[27]
Han, X. F.; Laga, H.; Bennamoun, M. Image-based 3D object reconstruction: State-of-the-art and trends in the deep learning era. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
[28]
Ni, L. X.; Jiang, H. Y.; Cai, J. F.; Zheng, J. M.; Li, H. F.; Liu, X. Unsupervised dense light field reconstruction with occlusion awareness. Computer Graphics Forum Vol. 38, No. 7, 425–436, 2019.
[29]
Takahashi, K.; Kubota, A.; Naemura, T. All in-focus view synthesis from under-sampled light fields. In: Proceedings of the 13th International Conference on Artificial Reality and Telexistence, 2003.
[30]
Sugita, K.; Takahashi, K.; Naemura, T.; Harashima, H. Focus measurement on programmable graphics hardware for all in-focus rendering from light fields. In: Proceedings of the IEEE Virtual Reality, 255–256, 2004.
[31]
Kubota, A.; Takahashi, K.; Aizawa, K.; Chen, T. All-focused light field rendering. In: Proceedings of the 15th Eurographics Workshop on Rendering Techniques, 235–242, 2004.
[32]
Li, C.; Zhang, X. High dynamic range and all-focus image from light field. In: Proceedings of the IEEE 7th International Conference on Cybernetics and Intelligent Systems and IEEE Conference on Robotics, Automation and Mechatronics, 7–12, 2015.
[33]
Heigl, B.; Koch, R.; Pollefeys, M.; Denzler, J.; Van Gool, L. Plenoptic modeling and rendering from image sequences taken by a hand-held camera. In: Mustererkennung 1999. Informatik aktuell. Förstner, W.; Buhmann, J. M.; Faber, A.; Faber, P. Eds. Springer Berlin Heidelberg, 94–101, 1999.
[34]
Sanda Mahama, A.; Dossa, A.; Gouton, P. Choice of distance metrics for RGB color image analysis. Electronic Imaging Vol. 2016, No. 20, 1–4, 2016.
[35]
Welford, B. P. Note on a method for calculating corrected sums of squares and products. Technometrics Vol. 4, No. 3, 419–420, 1962.
[37]
Rerabek, M.; Ebrahimi, T. New light field image dataset. In: Proceedings of the 8th International Conference on Quality of Multimedia Experience, 2016.
[38]
Cheggour, H. Blender demo files: Barcelona Pavilion.
[39]
Kalantari, N. K.; Wang, T.-C.; Ramamoorthi, R. Learning-based view synthesis for light field cameras. ACM Transactions on Graphics Vol. 35, No. 6, Article No. 193, 2016.
[40]
Banz, C.; Blume, H.; Pirsch, P. Real-time semi-global matching disparity estimation on the GPU. In: Proceedings of the IEEE International Conference on Computer Vision Workshops, 514–521, 2011.
[41]
Chuchvara, A.; Barsi, A.; Gotchev, A. Fast and accurate depth estimation from sparse light fields. IEEE Transactions on Image Processing Vol. 29, 2492–2506, 2020.
[42]
Chai, J. X.; Chan, S. C.; Shum, H. Y.; Tong, X. Plenoptic sampling. In: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, 307–318, 2000.