
Foveated rendering: A state-of-the-art survey

Lili Wang1,2,3, Xuehuai Shi1 (corresponding author), Yi Liu1
1. State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100000, China
2. Peng Cheng Laboratory, Shenzhen 518000, China
3. Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100000, China

Abstract

Recently, virtual reality (VR) technology has been widely used in medical, military, manufacturing, entertainment, and other fields. These applications must simulate complex material surfaces, various dynamic objects, and intricate physical phenomena, all of which increase the complexity of VR scenes. Current computing devices cannot render such complex scenes efficiently in real time, and delayed rendering makes the content observed by the user inconsistent with the user’s interaction, causing discomfort. Foveated rendering is a promising technique for accelerating rendering: it exploits inherent features of the human eye and renders different regions at different quality levels without sacrificing perceived visual quality. Foveated rendering research spans 31 years and focuses mainly on three problems. The first is applying perceptual models of the human visual system to foveated rendering. The second is rendering images at different quality levels according to foveation principles. The third is integrating foveated rendering into existing rendering paradigms to improve rendering performance. In this survey, we review foveated rendering research from 1990 to 2021. We first revisit the visual perceptual models related to foveated rendering. We then propose a new foveated rendering taxonomy, and classify and review the research on that basis. Finally, we discuss potential opportunities and open questions in the foveated rendering field. We anticipate that this survey will provide new researchers with a high-level overview of the state of the art in this field, furnish experts with up-to-date information, and offer VR display software and hardware designers and engineers ideas together with a framework.

Keywords: real-time rendering, foveated rendering, virtual reality (VR)
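
To make the foveation principle described in the abstract concrete, the following minimal Python sketch (our illustration, not code from the survey) maps a pixel’s angular distance from the gaze point to one of three nested shading-quality layers, a common scheme in foveated rendering. All names and constants here (eccentricity_deg, shading_rate, the 5°/15° layer radii, 40 pixels per degree) are illustrative assumptions.

import math

# Hypothetical sketch: eccentricity-based selection of a shading-quality layer.
# Layer radii and display resolution are assumed values, not survey parameters.

def eccentricity_deg(px, py, gaze_x, gaze_y, pixels_per_degree):
    # Angular distance (degrees) between a pixel and the gaze point,
    # approximating the display as locally flat.
    dist_px = math.hypot(px - gaze_x, py - gaze_y)
    return dist_px / pixels_per_degree

def shading_rate(ecc_deg, foveal_radius=5.0, mid_radius=15.0):
    # Three nested layers: full resolution in the fovea, reduced in the
    # mid region, coarsest in the periphery, following visual acuity falloff.
    if ecc_deg <= foveal_radius:
        return 1.0   # full-resolution shading
    if ecc_deg <= mid_radius:
        return 0.5   # half-resolution shading
    return 0.25      # quarter-resolution shading

if __name__ == "__main__":
    ppd = 40.0            # assumed display pixels per degree
    gaze = (960, 540)     # gaze point from an eye tracker (assumed)
    for pixel in [(960, 540), (1200, 600), (1900, 1000)]:
        e = eccentricity_deg(*pixel, *gaze, ppd)
        print(f"pixel {pixel}: {e:.1f} deg eccentricity -> rate {shading_rate(e)}")

Running the sketch prints a shading rate of 1.0 at the gaze point and progressively lower rates toward the periphery, which is the quality gradient foveated renderers exploit for speedup.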


Publication history

Received: 21 December 2021
Accepted: 29 July 2022
Published: 03 January 2023
Issue date: June 2023

Copyright

© The Author(s) 2022.

Acknowledgements

This work was supported by the National Key R&D Program of China (2019YFC1521102), the National Natural Science Foundation of China (61932003), and the Beijing Science and Technology Plan (Z221100007722004).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
