Research Article | Open Access

Exploring contextual priors for real-world image super-resolution

Shixiang Wu1,2, Chao Dong1,3, Yu Qiao1,3 (✉)
1 Guangdong–Hong Kong–Macao Joint Laboratory of Human–Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Shanghai AI Laboratory, Shanghai, China

Abstract

Real-world blind image super-resolution is a challenging problem due to the absence of target high-resolution images for training. Inspired by the recent success of the single-image-generation-based method SinGAN, we tackle this problem with a refined model, SR-SinGAN, which learns to perform super-resolution from a single real image. First, we empirically find that a downsampled LR input of appropriate size improves the robustness of the generation model. Second, we introduce a global contextual prior that provides semantic information; this helps to remove distorted pixels and improve output fidelity. Finally, we design an image-gradient-based local contextual prior to guide detail generation. It alleviates generated artifacts in smooth areas while preserving rich details in densely textured regions (e.g., hair, grass). To evaluate the effectiveness of these contextual priors, we conducted extensive experiments on both artificial and real images. The results show that these priors stabilize training and preserve output fidelity, improving the quality of the generated images. We furthermore find that single-image-generation-based methods work better for images with repeated textures than for general images.
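The image-gradient-based local contextual prior described above can be sketched as a normalized gradient-magnitude map: high values mark densely textured regions where detail generation should be encouraged, and low values mark smooth areas where artifacts should be suppressed. The sketch below is an illustrative stand-in, not the paper's exact formulation; the function name `local_gradient_prior` and the normalization scheme are assumptions.

```python
import numpy as np

def local_gradient_prior(img, eps=1e-8):
    """Illustrative gradient-based local prior map (not the paper's exact method).

    Returns a map in [0, 1] with the same shape as `img`: near 1 in densely
    textured regions, near 0 in smooth areas. Such a map could weight a
    detail-generation loss, encouraging texture only where gradients are strong.
    """
    img = np.asarray(img, dtype=np.float64)
    # Finite-difference gradients along rows and columns (same shape as input).
    gy, gx = np.gradient(img)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    # Normalize so the strongest gradient maps to ~1; eps avoids division by zero
    # for perfectly flat patches.
    return mag / (mag.max() + eps)

# A flat patch yields a near-zero prior; a textured patch yields high values.
flat = np.ones((8, 8))
textured = np.indices((8, 8)).sum(axis=0) % 2  # checkerboard texture
prior_flat = local_gradient_prior(flat)
prior_tex = local_gradient_prior(textured)
```

In this sketch, a flat region produces an all-zero prior (no detail synthesis), while the checkerboard produces values near 1 where gradients are strong, matching the abstract's goal of suppressing artifacts in smooth areas while preserving detail in textured ones.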


Computational Visual Media
Pages 159-177


Cite this article:
Wu S, Dong C, Qiao Y. Exploring contextual priors for real-world image super-resolution. Computational Visual Media, 2025, 11(1): 159-177. https://doi.org/10.26599/CVM.2025.9450303

Views: 987 | Downloads: 60 | Citations: Crossref 0, Web of Science 0, Scopus 0, CSCD 0

Received: 06 April 2022
Accepted: 26 June 2022
Published: 28 February 2025
© The Author(s) 2025.

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
