
Hierarchical vectorization for facial images

Qian Fu 1,2,*, Linlin Liu 3,*, Fei Hou 4,5, Ying He 1

1. School of Computer Science and Engineering, Nanyang Technological University, 639798, Singapore
2. Data61, Commonwealth Scientific and Industrial Research Organisation, Sydney 2015, Australia
3. Interdisciplinary Graduate School, Nanyang Technological University and Alibaba Group, 639798, Singapore
4. State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
5. University of Chinese Academy of Sciences, Beijing 100049, China

* Qian Fu and Linlin Liu contributed equally to this work.

Abstract

The explosive growth of social media means portrait editing and retouching are in high demand. While portraits are commonly captured and stored as raster images, editing raster images is non-trivial and requires the user to be highly skilled. Aiming at developing intuitive and easy-to-use portrait editing tools, we propose a novel vectorization method that can automatically convert raster images into a 3-tier hierarchical representation. The base layer consists of a set of sparse diffusion curves (DCs), which characterize salient geometric features and low-frequency colors and provide a means for semantic color transfer and facial expression editing. The middle layer encodes specular highlights and shadows as large, editable Poisson regions (PRs), allowing the user to directly adjust illumination by tuning the strength and changing the shapes of the PRs. The top layer contains two types of pixel-sized PRs for high-frequency residuals and fine details such as pimples and pigmentation. We train a deep generative model that can produce high-frequency residuals automatically. Thanks to the inherent meaning of vector primitives, editing portraits becomes easy and intuitive. In particular, our method supports color transfer, facial expression editing, highlight and shadow editing, and automatic retouching. To quantitatively evaluate the results, we extend the commonly used FLIP metric (which measures color and feature differences between two images) to take illumination into account. The new metric, illumination-sensitive FLIP, effectively captures salient changes in color transfer results and is more consistent with human perception than FLIP and other quality measures for portrait images. We evaluate our method on the FFHQR dataset and show it to be effective for common portrait editing tasks, such as retouching, light editing, color transfer, and expression editing.
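For readers unfamiliar with these primitives, the sketch below illustrates how the two lower tiers render: diffusion-curve colors spread by solving a Laplace equation with the curves as Dirichlet constraints, while a Poisson region prescribes a nonzero Laplacian over its interior, producing a smooth, tunable bump such as a highlight. This is a minimal Jacobi-iteration illustration of the underlying PDEs, not the paper's solver; the grid size, curve placement, and `strength` knob are arbitrary choices for the toy example.

```python
import numpy as np

def solve(constraint_mask, constraint_vals, lap, iters=5000):
    """Jacobi iteration for the discrete Poisson equation  Δu = lap,
    with u pinned to constraint_vals where constraint_mask is True.
    (The Laplace equation is the special case lap = 0.)"""
    u = np.where(constraint_mask, constraint_vals, 0.0)
    for _ in range(iters):
        # np.roll gives periodic boundaries; acceptable for this toy grid.
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = avg - 0.25 * lap                      # 5-point stencil update
        u[constraint_mask] = constraint_vals[constraint_mask]  # re-pin
    return u

H, W = 64, 64
zeros = np.zeros((H, W))

# Base tier: two "diffusion curves" (here, pinned pixel columns) whose
# grayscale colors diffuse smoothly into the space between them.
mask = np.zeros((H, W), bool)
vals = zeros.copy()
mask[:, 16], vals[:, 16] = True, 1.0              # bright curve
mask[:, 48], vals[:, 48] = True, 0.2              # dark curve
base = solve(mask, vals, zeros)

# Middle tier: a disk-shaped "Poisson region" with a prescribed negative
# Laplacian, which lifts its interior smoothly, like an editable specular
# highlight; scaling `strength` is the tuning the abstract mentions.
yy, xx = np.mgrid[:H, :W]
disk = (yy - 32) ** 2 + (xx - 32) ** 2 < 12 ** 2
strength = 0.02                                   # hypothetical edit knob
highlight = solve(~disk, zeros, -strength * disk.astype(float))
composite = np.clip(base + highlight, 0.0, 1.0)   # layered result
```

The top tier's pixel-sized PRs work the same way at single-pixel scale, which is why high-frequency residuals and blemishes remain individually editable after vectorization.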
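The exact definition of illumination-sensitive FLIP is not given in this excerpt, so the following sketch is only a guess at its spirit: a FLIP-style per-pixel color error augmented with a low-frequency luminance term, so that lighting edits, which plain color differences can under-report, contribute to the score. The weight `w_illum`, the blur width `sigma`, and the simple Euclidean color term are illustrative placeholders, not the paper's formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def illumination_sensitive_error(ref, test, sigma=8.0, w_illum=0.5):
    """ref, test: H x W x 3 float RGB in [0, 1]. Returns a per-pixel
    error map in [0, 1]; a scalar score would be its mean."""
    # Crude stand-in for FLIP's color term: normalized Euclidean distance.
    color_err = np.linalg.norm(ref - test, axis=2) / np.sqrt(3.0)
    # Illumination proxy: heavily blurred luminance isolates lighting
    # from texture, so shadow/highlight edits register even when hues match.
    lum = lambda im: 0.2126*im[..., 0] + 0.7152*im[..., 1] + 0.0722*im[..., 2]
    illum_err = np.abs(gaussian_filter(lum(ref), sigma) -
                       gaussian_filter(lum(test), sigma))
    # Blend the two terms; the weighting scheme is assumed, not the paper's.
    return np.clip((1 - w_illum)*color_err + w_illum*illum_err, 0.0, 1.0)
```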

Keywords: face editing, vectorization, Poisson editing, color transfer, illumination editing, expression editing


Publication history

Received: 27 May 2022
Accepted: 21 September 2022
Published: 30 November 2023
Issue date: February 2024

Copyright

© The Author(s) 2023.

Acknowledgements

This project was supported by the Ministry of Education, Singapore, under its Academic Research Fund Tier 1 (RG20/20), the National Natural Science Foundation of China (61872347), and the Special Plan for the Development of Distinguished Young Scientists of ISCAS (Y8RC535018).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
