Open Access

Lightweight Super-Resolution Model for Complete Model Copyright Protection

Department of Computer Science, Georgia State University, Atlanta, GA 30303, USA
LSWare Inc., Seoul 08504, Republic of Korea
College of Intelligence Information Engineering, Sangmyung University, Seoul 03016, Republic of Korea

Abstract

Deep learning-based techniques are widely used across applications and often outperform traditional methods. Image super-resolution is one of the mainstream topics in computer vision. In recent deep neural networks, the number of parameters in each convolutional layer has grown along with deeper architectures and larger feature maps, yielding better super-resolution performance. Today, numerous service providers offer super-resolution services, giving users remarkable convenience. However, open-source super-resolution services expose providers to the risk of copyright infringement, because the complete model may be leaked. Safeguarding the copyright of the complete model is therefore a non-trivial concern. To tackle this issue, this paper presents a lightweight model as a substitute for the original complete model in image super-resolution. This research identifies smaller networks that deliver impressive performance while protecting the original model's copyright. Finally, comprehensive experiments on multiple datasets demonstrate the superiority of the proposed approach in generating super-resolution images even with lightweight neural networks.
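The parameter-count motivation behind lightweight substitutes can be illustrated with a short sketch. Note this is not the paper's actual architecture: depthwise separable convolution is one common lightweighting technique assumed here purely for illustration, showing how a standard convolutional layer's parameter count drops when factored into a depthwise plus pointwise pair.

```python
def conv_params(k: int, c_in: int, c_out: int) -> int:
    """Parameters in a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def dw_separable_params(k: int, c_in: int, c_out: int) -> int:
    """Depthwise k x k convolution followed by a 1 x 1 pointwise convolution.

    This factorization (used by MobileNet-style models) is a hypothetical
    stand-in for whatever compression the lightweight model actually uses.
    """
    return k * k * c_in + c_in * c_out

# A typical 3x3 layer mapping 64 channels to 64 channels:
standard = conv_params(3, 64, 64)            # 36864 parameters
lightweight = dw_separable_params(3, 64, 64) # 4672 parameters
print(f"standard: {standard}, separable: {lightweight}, "
      f"ratio: {standard / lightweight:.2f}x")
```

For this layer the separable variant needs roughly an eighth of the parameters, which is the kind of reduction that makes a smaller network a plausible stand-in for the full model.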

Tsinghua Science and Technology
Pages 1194-1205
Cite this article:
Xie B, Xu H, Joe Y, et al. Lightweight Super-Resolution Model for Complete Model Copyright Protection. Tsinghua Science and Technology, 2024, 29(4): 1194-1205. https://doi.org/10.26599/TST.2023.9010082


Received: 09 June 2023
Revised: 02 August 2023
Accepted: 04 August 2023
Published: 09 February 2024
© The Author(s) 2024.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
