Journal Home > Volume 4, Issue 3




Multi-resolution network based image steganalysis model

Zimiao Wang1, Jinsong Wu2
1 School of Computer Science, Beijing University of Posts and Telecommunications, Beijing 100876, China
2 School of Artificial Intelligence, Guilin University of Electronic Technology, 510004, China; also with the Department of Electrical Engineering, University of Chile, Santiago 8370451, Chile

Abstract

Recently, many steganalysis approaches have improved their feature extraction ability by adding convolutional layers. However, deeper networks often reduce the resolution of feature maps during downsampling, which makes it difficult to extract weak steganographic signals accurately. To address this issue, this paper proposes a multi-resolution steganalysis network (MRS-Net). MRS-Net adopts a multi-resolution network to extract global image information, fusing the output feature maps to preserve high-dimensional semantic information while supplementing low-level detail. Furthermore, the model incorporates an attention module that analyzes image sensitivity from both channel and spatial information, allowing it to focus effectively on regions rich in steganographic signals. Benchmark experiments on the BOSSBase 1.01 dataset show that MRS-Net improves detection accuracy by 9.9% and 3.3% over YeNet and SRNet, respectively, demonstrating its strong steganalysis capability.
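The channel-and-spatial attention described above follows the general pattern of a CBAM-style block (reference [13]): a channel branch reweights feature maps using pooled descriptors passed through a shared MLP, then a spatial branch reweights locations using channel-wise pooled maps. The sketch below is a minimal NumPy illustration of that pattern, not the authors' MRS-Net implementation; all shapes, weights, and function names are assumptions chosen for clarity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    # x: (C, H, W). Average- and max-pooled channel descriptors are
    # passed through a shared two-layer MLP (w1, w2) and summed,
    # producing one gating weight per channel.
    avg = x.mean(axis=(1, 2))                       # (C,)
    mx = x.max(axis=(1, 2))                         # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)    # ReLU hidden layer
    scale = sigmoid(mlp(avg) + mlp(mx))             # (C,)
    return x * scale[:, None, None]

def spatial_attention(x, k):
    # x: (C, H, W). Channel-wise average and max maps are stacked and
    # convolved with a k x k kernel (naive 'same' convolution here),
    # producing one gating weight per spatial position.
    avg = x.mean(axis=0)                            # (H, W)
    mx = x.max(axis=0)                              # (H, W)
    stacked = np.stack([avg, mx])                   # (2, H, W)
    ks = k.shape[-1]
    pad = ks // 2
    padded = np.pad(stacked, ((0, 0), (pad, pad), (pad, pad)))
    H, W = avg.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[:, i:i + ks, j:j + ks] * k)
    return x * sigmoid(out)[None, :, :]

def attention_block(x, w1, w2, k):
    # Channel attention first, then spatial attention (CBAM ordering).
    return spatial_attention(channel_attention(x, w1, w2), k)

rng = np.random.default_rng(0)
C, H, W, r = 8, 16, 16, 2                   # r: channel reduction ratio
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1  # reduction layer
w2 = rng.standard_normal((C, C // r)) * 0.1  # expansion layer
k = rng.standard_normal((2, 7, 7)) * 0.01    # 7x7 spatial kernel
y = attention_block(x, w1, w2, k)
print(y.shape)  # (8, 16, 16): the block preserves the feature-map shape
```

Because both branches only rescale the input by values in (0, 1), the block preserves feature-map shape, which is what lets such a module be dropped between convolutional stages without altering the surrounding architecture.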

Keywords: multi-resolution, attention module, image steganalysis

References (18)

[1] P. C. Mandal, I. Mukherjee, G. Paul, and B. N. Chatterji, Digital image steganography: A literature survey, Inf. Sci., vol. 609, pp. 1451–1488, 2022.
[2] W. M. Eid, S. S. Alotaibi, H. M. Alqahtani, and S. Q. Saleh, Digital image steganalysis: Current methodologies and future challenges, IEEE Access, vol. 10, pp. 92321–92336, 2022.
[3] J. Fridrich and J. Kodovsky, Rich models for steganalysis of digital images, IEEE Trans. Inf. Forensics Secur., vol. 7, no. 3, pp. 868–882, 2012.
[4] T. S. Reinel, A. A. H. Brayan, B. O. M. Alejandro, M. R. Alejandro, A. G. Daniel, A. G. J. Alejandro, B. J. A. Buenaventura, O. A. Simon, I. Gustavo, and R. P. Raúl, GBRAS-Net: A convolutional neural network architecture for spatial image steganalysis, IEEE Access, vol. 9, pp. 14340–14350, 2021.
[5] T. Muralidharan, A. Cohen, A. Cohen, and N. Nissim, The infinite race between steganography and steganalysis in images, Signal Process., vol. 201, p. 108711, 2022.
[6] S. Tan and B. Li, Stacked convolutional auto-encoders for steganalysis of digital images, in Proc. 2014 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), Siem Reap, Cambodia, 2015, pp. 1–4.
[7] Y. Qian, J. Dong, W. Wang, and T. Tan, Deep learning for steganalysis via convolutional neural networks, in Proc. 2015 IS&T/SPIE Electronic Imaging Symp., San Francisco, CA, USA, 2015, pp. 94090J-1–94090J-10.
[8] G. Xu, H. Z. Wu, and Y. Q. Shi, Structural design of convolutional neural networks for steganalysis, IEEE Signal Process. Lett., vol. 23, no. 5, pp. 708–712, 2016.
[9] J. Ye, J. Ni, and Y. Yi, Deep learning hierarchical representations for image steganalysis, IEEE Trans. Inf. Forensics Secur., vol. 12, no. 11, pp. 2545–2557, 2017.
[10] M. Boroumand, M. Chen, and J. Fridrich, Deep residual network for steganalysis of digital images, IEEE Trans. Inf. Forensics Secur., vol. 14, no. 5, pp. 1181–1193, 2019.
[11] R. Zhang, F. Zhu, J. Liu, and G. Liu, Depth-wise separable convolutions and multi-level pooling for an efficient spatial CNN-based steganalysis, IEEE Trans. Inf. Forensics Secur., vol. 15, pp. 1138–1150, 2020.
[12] Q. Li, G. Feng, Y. Ren, and X. Zhang, Embedding probability guided network for image steganalysis, IEEE Signal Process. Lett., vol. 28, pp. 1095–1099, 2021.
[13] S. Woo, J. Park, J. Y. Lee, and I. S. Kweon, CBAM: Convolutional block attention module, in Proc. 2018 European Conf. Computer Vision (ECCV), Munich, Germany, 2018, pp. 3–19.
[14] G. Huang, D. Chen, T. Li, F. Wu, L. van der Maaten, and K. Q. Weinberger, Multi-scale dense convolutional networks for efficient prediction, arXiv preprint arXiv:1703.09844, 2017.
[15] K. Sun, B. Xiao, D. Liu, and J. Wang, Deep high-resolution representation learning for human pose estimation, in Proc. 2019 IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2019, pp. 5686–5696.
[16] V. Holub and J. Fridrich, Designing steganographic distortion using directional filters, in Proc. 2012 IEEE Int. Workshop on Information Forensics and Security (WIFS), Costa Adeje, Spain, 2013, pp. 234–239.
[17] V. Holub and J. Fridrich, Digital image steganography using universal distortion, in Proc. 1st ACM Workshop on Information Hiding and Multimedia Security, Montpellier, France, 2013, pp. 59–68.
[18] V. Sedighi, R. Cogranne, and J. Fridrich, Content-adaptive steganography by minimizing statistical detectability, IEEE Trans. Inf. Forensics Secur., vol. 11, no. 2, pp. 221–234, 2016.

Publication history

Received: 04 May 2023
Accepted: 16 May 2023
Published: 30 September 2023
Issue date: September 2023

Copyright

© All articles included in the journal are copyrighted by the ITU and TUP.

Acknowledgements

This paper was supported in part by the China Guangxi Science and Technology Plan Project (Guangxi Science and Technology Base and Talent Special Project) (No. 2022AC20001), Chile CONICYT FONDECYT Regular Project (No. 1181809), and the Chile CONICYT FONDEF Program (No. ID16I10466).

Rights and permissions

This work is available under the CC BY-NC-ND 3.0 IGO license: https://creativecommons.org/licenses/by-nc-nd/3.0/igo/
