In recent years, 3D face reconstruction has become a research hotspot in computer graphics and computer vision. Most current 3DMM-based methods focus on learning displacement maps to recover high-frequency facial details. However, they pay less attention to mid-frequency facial details, and the displacement maps they learn often contain noise, which degrades reconstruction accuracy. Thus, this work presents a novel approach to regressing accurate and detailed 3D face shapes. First, we design a novel feature consistency loss to recover mid-frequency facial details. Specifically, we exploit the powerful CLIP model as prior knowledge of faces to extract geometric and semantic features, which guide the reconstructed 3D geometric details to match the local details of the input image. Furthermore, we propose a parameter refinement module to learn fine-grained features, which helps obtain accurate model parameters and improves the accuracy of facial reconstruction. Extensive experiments on the FaceScape and REALY benchmarks demonstrate that our method outperforms several state-of-the-art methods in reconstruction accuracy. Comprehensive qualitative results further show that our approach achieves better visual quality than existing methods.
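The abstract does not spell out the exact form of the feature consistency loss. A minimal sketch of one plausible formulation, assuming the reconstruction is rendered to an image and both the rendering and the input are embedded with an image encoder such as CLIP (the encoder call and the function name here are illustrative, not the paper's implementation):

```python
import numpy as np

def feature_consistency_loss(feat_recon, feat_input):
    """Hypothetical feature-consistency objective: one minus the cosine
    similarity between the feature vector of the rendered reconstruction
    and that of the input image (e.g. CLIP image embeddings).

    Returns 0 when the two feature vectors point in the same direction
    and grows as they diverge, encouraging the reconstructed geometric
    details to match local details of the input image."""
    a = np.asarray(feat_recon, dtype=np.float64)
    b = np.asarray(feat_input, dtype=np.float64)
    # Normalize so the loss depends only on feature direction.
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return 1.0 - float(np.dot(a, b))
```

In practice the feature vectors would come from a frozen pretrained encoder applied to the differentiably rendered face and the input photograph, and the loss would be minimized jointly with the model's other reconstruction terms.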

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.