Journal Home > Volume 8, Issue 2




Towards natural object-based image recoloring

Meng-Yao Cui 1, Zhe Zhu 2, Yulu Yang 1, Shao-Ping Lu 1 (corresponding author)
1 TKLNDST, CS, Nankai University, Tianjin 300350, China
2 Department of Radiology, Duke University, Durham, NC 27705, USA

Abstract

Existing color editing algorithms enable users to edit the colors in an image according to their own aesthetics. Unlike artists, who have an accurate grasp of color, ordinary users are inexperienced in color selection and matching, so allowing non-professional users to edit colors arbitrarily may produce unrealistic results. To address this issue, we introduce a palette-based approach for realistic object-level image recoloring. Our data-driven approach consists of an offline learning part, which learns the color distributions of different objects in the real world, and an online recoloring part, which first recognizes the object category and then recommends appropriate, realistic candidate colors learned in the offline step for that category. We also provide an intuitive user interface for efficient color manipulation. After color selection, image matting is performed to ensure smoothness of the object boundary. Comprehensive evaluation on various color editing examples demonstrates that our approach outperforms existing state-of-the-art color editing algorithms.

Keywords: object recognition, color editing, color palette representation, natural color
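The offline/online split described in the abstract can be illustrated with a minimal sketch. The functions below are hypothetical simplifications, not the paper's implementation: the palette is approximated by coarse RGB histogram binning rather than the paper's learned color distributions, and the recommendation step snaps a user-chosen color to the nearest learned candidate using plain Euclidean distance instead of a perceptual color-difference metric.

```python
from collections import Counter

def extract_palette(pixels, k=5, bin_size=32):
    """Offline step (sketch): bucket object-region pixels into coarse
    RGB bins and return the k most frequent bin centers as a palette."""
    bins = Counter()
    for r, g, b in pixels:
        bins[(r // bin_size, g // bin_size, b // bin_size)] += 1
    half = bin_size // 2
    return [tuple(c * bin_size + half for c in key)
            for key, _ in bins.most_common(k)]

def recommend_color(candidate_palette, user_color):
    """Online step (sketch): replace a user-chosen color with the
    closest realistic candidate for the recognized object category."""
    return min(candidate_palette,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(c, user_color)))
```

For example, a region dominated by reds yields a red-centered palette, and an implausible user pick is snapped to the nearest learned color before recoloring.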


Publication history

Received: 28 March 2021
Accepted: 28 May 2021
Published: 06 December 2021
Issue date: June 2022

Copyright

© The Author(s) 2021.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61972216 and 62111530097) and the NSF of Tianjin City (Grant Nos. 18JCYBJC41300 and 18ZXZNGX00110).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
