
A Generative Method for Steganography by Cover Synthesis with Auxiliary Semantics

Zhuo Zhang, Guangyuan Fu, Rongrong Ni, Jia Liu, and Xiaoyuan Yang
Rocket Force University of Engineering, Xi’an 710025, China.
Key Laboratory of Network and Information Security of PAP, Engineering University of PAP, Xi’an 710086, China.
Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China.

Abstract

Traditional steganography embeds a secret message into an image by modifying information in the spatial or frequency domain of the cover image. Although this approach offers a large embedding capacity, it inevitably leaves traces of modification that can eventually be discovered by an adversary. Steganography by Cover Synthesis (SCS) instead attempts to construct a natural stego image, so that no cover image is modified and detection by a steganographic analyzer can be evaded. However, the difficulty of constructing natural stego images has limited the development of SCS. In this paper, a novel generative SCS method for image steganography based on a Generative Adversarial Network (GAN) is proposed. In our method, we design a GAN model called the Synthetic Semantics Stego Generative Adversarial Network (SSS-GAN) to generate stego images from secret messages. By establishing a mapping between secret messages and semantic category information, category labels drive the generative model to produce pseudo-real images. The receiver then recognizes the labels with a classifier network to recover the concealed information. We trained the model on the MNIST, CIFAR-10, and CIFAR-100 image datasets. Experiments demonstrate the feasibility of the method, and its security, capacity, and robustness are analyzed.

Keywords: information hiding, steganography, steganography without modification, Steganography by Cover Synthesis (SCS), generative adversarial networks
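
The embedding and extraction workflow described in the abstract can be illustrated with a minimal sketch. The generator G and classifier C below are hypothetical placeholders standing in for the trained SSS-GAN components (this is not the authors' implementation); the sketch only shows how a secret bit stream could be mapped to semantic category labels on the sender's side and recovered by classification on the receiver's side.

import numpy as np

NUM_CLASSES = 10                               # e.g., CIFAR-10 has 10 semantic categories
BITS_PER_IMAGE = int(np.log2(NUM_CLASSES))     # each stego image carries floor(log2 n) bits

def bits_to_labels(bits):
    # Sender side: split the secret bit stream into chunks and map each chunk to a class label.
    labels = []
    for i in range(0, len(bits), BITS_PER_IMAGE):
        chunk = bits[i:i + BITS_PER_IMAGE].ljust(BITS_PER_IMAGE, "0")
        labels.append(int(chunk, 2))
    return labels

def labels_to_bits(labels, n_bits):
    # Receiver side: invert the label mapping and drop the padding bits.
    bits = "".join(format(label, f"0{BITS_PER_IMAGE}b") for label in labels)
    return bits[:n_bits]

# Hypothetical stand-ins for the trained conditional generator and classifier network.
def G(label, z):
    # Conditional generator: (class label, noise vector) -> synthetic stego image.
    return {"label": label}                    # placeholder "image"

def C(image):
    # Classifier: stego image -> predicted class label.
    return image["label"]                      # placeholder prediction

# Sender: secret bits -> category labels -> synthetic stego images.
secret = "1011001110"
stego_images = [G(label, np.random.randn(100)) for label in bits_to_labels(secret)]

# Receiver: classify each received image, then invert the label mapping.
recovered = labels_to_bits([C(img) for img in stego_images], len(secret))
assert recovered == secret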


Publication history

Received: 11 April 2019
Revised: 03 June 2019
Accepted: 05 June 2019
Published: 13 January 2020
Issue date: August 2020

Copyright

© The author(s) 2020

Acknowledgements

This work was supported by the National Natural Science Foundation of China (NSFC) (Nos. 61872384 and 61672090).

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
