
Co-occurrence based texture synthesis

Anna Darzi¹, Itai Lang¹, Ashutosh Taklikar¹, Hadar Averbuch-Elor² (corresponding author), Shai Avidan¹

¹ Tel Aviv University, Tel Aviv 6997801, Israel
² Cornell Tech, Cornell University, New York, NY 10044, USA

Abstract

As image generation techniques mature, there is a growing interest in explainable representations that are easy to understand and intuitive to manipulate. In this work, we turn to co-occurrence statistics, which have long been used for texture analysis, to learn a controllable texture synthesis model. We propose a fully convolutional generative adversarial network, conditioned locally on co-occurrence statistics, to generate arbitrarily large images while having local, interpretable control over texture appearance. To encourage fidelity to the input condition, we introduce a novel differentiable co-occurrence loss that is integrated seamlessly into our framework in an end-to-end fashion. We demonstrate that our solution offers a stable, intuitive, and interpretable latent representation for texture synthesis, which can be used to generate smooth texture morphs between different textures. We further show an interactive texture tool that allows a user to adjust local characteristics of the synthesized texture by directly using the co-occurrence values.

Keywords: deep learning, texture synthesis, co-occurrence, generative adversarial networks (GANs)
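
The abstract's key ingredients can be made concrete with a short sketch. A co-occurrence matrix counts how often pairs of (quantized) pixel values appear together at a fixed spatial offset; it has been a staple of texture analysis for decades. The following is a minimal NumPy illustration, where the number of bins, the offset, and the normalization are illustrative choices rather than the paper's exact parameters:

import numpy as np

def cooccurrence_matrix(img, levels=16, offset=(0, 1)):
    """Hard-binned co-occurrence of quantized intensities.

    img: 2D float array with values in [0, 1].
    levels: number of quantization bins (illustrative choice).
    offset: (dy, dx) displacement between the pixels of each pair.
    """
    q = np.minimum((img * levels).astype(int), levels - 1)  # quantize to bins
    dy, dx = offset  # non-negative offsets assumed, for brevity
    a = q[dy:, dx:]                                  # each pixel ...
    b = q[:q.shape[0] - dy, :q.shape[1] - dx]        # ... and its neighbour
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1.0)        # count value pairs
    m = m + m.T                                      # pair order is irrelevant
    return m / m.sum()                               # normalize to a distribution

Hard binning is not differentiable, so a statistic built this way cannot train a generator by backpropagation. A standard workaround, and the spirit of the differentiable co-occurrence loss named in the abstract, is to soft-assign each pixel to the bins with a Gaussian kernel and accumulate pair statistics from the soft weights. The PyTorch sketch below is a reconstruction under that assumption only: the paper's actual loss is defined over color co-occurrences collected in local windows, and centers, sigma, and the L1 comparison here are placeholder choices.

import torch

def soft_cooccurrence(img, centers, offset=(0, 1), sigma=0.1):
    """Differentiable co-occurrence via Gaussian soft binning.

    img: (H, W) tensor with values in [0, 1].
    centers: (K,) tensor of bin centers, e.g. torch.linspace(0, 1, 16).
    """
    # Soft assignment of every pixel to every bin, shape (H, W, K).
    w = torch.exp(-(img.unsqueeze(-1) - centers) ** 2 / (2 * sigma ** 2))
    w = w / w.sum(dim=-1, keepdim=True)
    dy, dx = offset  # non-negative offsets assumed, for brevity
    a = w[dy:, dx:]                                  # each pixel ...
    b = w[:w.shape[0] - dy, :w.shape[1] - dx]        # ... and its neighbour
    m = torch.einsum('hwi,hwj->ij', a, b)            # sum of outer products
    m = m + m.t()                                    # symmetrize
    return m / m.sum()

def cooccurrence_loss(generated, target_cooc, centers):
    """L1 penalty pulling a generated image toward target statistics."""
    return torch.abs(soft_cooccurrence(generated, centers) - target_cooc).mean()

Because the matrix is assembled from smooth weights, gradients flow from the loss back to every generated pixel, which is what allows a co-occurrence condition to be enforced end to end alongside the adversarial objective.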

Electronic supplementary material

101320TP-2022-2-289_ESM.pdf (10.9 MB)

Publication history

Received: 05 April 2021
Accepted: 28 May 2021
Published: 06 December 2021
Issue date: June 2022

Copyright

© The Author(s) 2021.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
