Regular Paper

Single Image Deraining Using Residual Channel Attention Networks

School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China

Abstract

Image deraining is a highly ill-posed problem. Although significant progress has been made through the use of deep convolutional neural networks, the problem remains challenging, especially with respect to detail restoration and generalization to real rain images. In this paper, we propose a deep residual channel attention network (DeRCAN) for deraining. The channel attention mechanism is able to capture the inherent properties of the feature space and thus facilitates more accurate estimation of structures and details for image deraining. In addition, we propose an unsupervised learning approach, built on the proposed network, to better handle real rain images. Extensive qualitative and quantitative evaluation results on both synthetic and real-world images demonstrate that the proposed DeRCAN performs favorably against state-of-the-art methods.
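The abstract's channel attention mechanism re-weights feature channels by their learned importance: spatial information is pooled into one descriptor per channel, a small bottleneck network models inter-channel dependencies, and the resulting gates rescale the feature map. The following is a minimal NumPy sketch of this generic squeeze-and-excitation style mechanism, not the authors' DeRCAN implementation; the random weights and the `reduction` ratio are illustrative placeholders.

```python
import numpy as np

def channel_attention(feat, reduction=4, rng=None):
    """Apply squeeze-and-excitation style channel attention to a (C, H, W) map.

    Illustrative sketch only: weights are random placeholders standing in for
    learned parameters.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    c, _, _ = feat.shape
    # Squeeze: global average pooling collapses spatial dims to one value per channel.
    desc = feat.mean(axis=(1, 2))                      # shape (C,)
    # Excitation: a bottleneck MLP models inter-channel dependencies.
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    hidden = np.maximum(w1 @ desc, 0.0)                # ReLU
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))       # sigmoid, each gate in (0, 1)
    # Rescale: each channel is multiplied by its attention weight.
    return feat * gates[:, None, None]

out = channel_attention(np.ones((8, 16, 16)))
print(out.shape)  # (8, 16, 16)
```

Because the gates lie in (0, 1), informative channels are preserved while less useful ones are suppressed, which is how the mechanism helps the network attend to structure- and detail-carrying features.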

Electronic Supplementary Material

Download File(s)
JCST-2009-10979-Highlights.pdf (583.3 KB)

Journal of Computer Science and Technology
Pages 439-454

Cite this article:
Wang D, Pan J-S, Tang J-H. Single Image Deraining Using Residual Channel Attention Networks. Journal of Computer Science and Technology, 2023, 38(2): 439-454. https://doi.org/10.1007/s11390-022-0979-2


Received: 09 September 2020
Accepted: 23 September 2022
Published: 30 March 2023
© Institute of Computing Technology, Chinese Academy of Sciences 2023