We propose PortraitACG, a novel framework for user-guided portrait image editing based on an asymmetric conditional generative adversarial network (GAN), which supports fine-grained editing of geometry, colors, lights, and shadows with a single neural network model. Existing conditional GAN-based approaches usually feed the same conditional information to both the generator and the discriminator, which is sub-optimal because the two modules serve different purposes. To enable flexible user-guided editing, we propose a novel asymmetric conditional GAN in which the generator takes transformed conditional inputs, such as edge maps, color palettes, sliders, and masks, that the user can edit directly, while the discriminator receives conditional inputs in a form that guides controllable image generation more effectively. This makes image editing operations simpler and more intuitive. For example, the user can directly use a color palette to specify the desired colors of the hair, skin, eyes, lips, and background, and use a slider to blend colors. Moreover, users can edit the lights and shadows by modifying their corresponding masks.
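The asymmetry described above can be made concrete with a small sketch. The following PyTorch-style code is purely illustrative and is not the authors' implementation: the module names, channel counts, and the choice of a dense face-parsing map as the discriminator's conditioning signal are assumptions made for the example. It only shows the interface in which the generator consumes user-editable conditions (edge map, color palette, light/shadow masks, and a blending slider) while the discriminator is conditioned on a different, denser signal.

```python
# Minimal sketch of an asymmetric conditional GAN interface (hypothetical;
# layer sizes and conditioning channels are illustrative, not the paper's).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Takes user-editable conditions: edge map (1 ch), color palette map (3 ch),
    light/shadow masks (2 ch), and a blending slider broadcast as one channel."""
    def __init__(self, cond_channels=7, out_channels=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(cond_channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, out_channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, edges, palette, masks, slider):
        # slider: one scalar in [0, 1] per image, broadcast to a spatial channel
        b, _, h, w = edges.shape
        slider_map = slider.view(b, 1, 1, 1).expand(b, 1, h, w)
        cond = torch.cat([edges, palette, masks, slider_map], dim=1)
        return self.net(cond)

class Discriminator(nn.Module):
    """Conditioned differently from the generator: it sees the image paired with
    a denser supervision signal (here, a per-pixel parsing map), which is easier
    to learn from but inconvenient for a user to edit directly."""
    def __init__(self, image_channels=3, dense_cond_channels=19, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(image_channels + dense_cond_channels, width, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(width, 1, 4, stride=2, padding=1),  # patch-level real/fake scores
        )

    def forward(self, image, dense_cond):
        return self.net(torch.cat([image, dense_cond], dim=1))

if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    edges = torch.randn(1, 1, 128, 128)
    palette = torch.randn(1, 3, 128, 128)
    masks = torch.randn(1, 2, 128, 128)
    slider = torch.rand(1)
    fake = G(edges, palette, masks, slider)
    dense_cond = torch.randn(1, 19, 128, 128)  # e.g., a one-hot face-parsing map
    scores = D(fake, dense_cond)
    print(fake.shape, scores.shape)
```

In this sketch the generator never sees the dense parsing map, so editing reduces to modifying the lightweight inputs (edges, palette, masks, slider), while the discriminator's richer conditioning is what pushes the generated image to respect those edits during training.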
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.