Volume 28, Issue 3




RFCNet: Remote Sensing Image Super-Resolution Using Residual Feature Calibration Network

Yuan Xue¹, Liangliang Li², Zheyuan Wang¹, Chenchen Jiang¹, Minqin Liu¹, Jiawen Wang³, Kaipeng Sun³, and Hongbing Ma² (corresponding author)
¹ College of Information Science and Engineering, Xinjiang University, Urumqi 830046, China
² Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
³ Shanghai Institute of Satellite Engineering, Shanghai 201109, China

Abstract

In the field of single remote sensing image Super-Resolution (SR), deep Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance. To further improve the performance of convolutional modules in processing remote sensing images, we construct an efficient residual feature calibration block to generate expressive features. After harvesting residual features, we first divide them into two parts along the channel dimension. One part flows to the Self-Calibrated Convolution (SCC) to be further refined, and the other part is rescaled by the proposed Two-Path Channel Attention (TPCA) mechanism. SCC corrects local features according to their responses under the enlarged receptive field, so that the features can be refined without increasing the computational cost. The proposed TPCA uses the means and variances of feature maps to obtain accurate channel attention vectors. Moreover, a region-level nonlocal operation is introduced to capture long-distance spatial contextual information by exploring pixel dependencies at the region level. Extensive experiments demonstrate that the proposed residual feature calibration network is superior to other SR methods in terms of quantitative metrics and visual quality.
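The TPCA idea described above — deriving channel attention from both the means and the variances of the feature maps — can be illustrated with a minimal sketch. The NumPy code below is an illustration only, not the authors' implementation: the learned mappings are replaced by plain linear weights `w_mean` and `w_std` (hypothetical names), and the two statistics paths are fused by simple addition before a sigmoid.

```python
import numpy as np

def two_path_channel_attention(x, w_mean, w_std):
    """Sketch of a two-path channel attention.

    x      : feature map of shape (C, H, W)
    w_mean : (C, C) linear map applied to per-channel means
    w_std  : (C, C) linear map applied to per-channel std deviations
    (Both stand in for the learned transforms of the real module.)
    """
    mean = x.mean(axis=(1, 2))   # per-channel mean,  shape (C,)
    std = x.std(axis=(1, 2))     # per-channel std,   shape (C,)
    # Fuse the two statistics paths and squash into (0, 1)
    attn = 1.0 / (1.0 + np.exp(-(w_mean @ mean + w_std @ std)))
    # Rescale each channel of the input by its attention weight
    return x * attn[:, None, None]
```

The point of the two paths is that the mean captures the average activation strength of a channel while the variance captures its contrast, so channels with flat but high-valued responses and channels with strong local structure are weighted differently.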

Keywords: Convolutional Neural Network (CNN), Super-Resolution (SR), attention mechanism, remote sensing image
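As a rough illustration of the region-level nonlocal operation — pooling the feature map into regions, relating region descriptors to each other instead of individual pixels, and feeding the aggregated context back as a residual — consider the sketch below. The region grid size, the dot-product affinity, and the residual fusion are generic nonlocal-block assumptions, not the paper's exact design.

```python
import numpy as np

def region_nonlocal(x, region=4):
    """Sketch of a region-level nonlocal block.

    x      : feature map of shape (C, H, W), with H and W divisible by `region`
    region : number of regions per spatial axis (region*region regions total)
    """
    C, H, W = x.shape
    rh, rw = H // region, W // region
    # Average-pool each region into a descriptor: (C, R) with R = region*region
    desc = x.reshape(C, region, rh, region, rw).mean(axis=(2, 4)).reshape(C, -1)
    # Affinity between regions: dot product, softmax-normalised per row
    aff = desc.T @ desc                                  # (R, R)
    aff = np.exp(aff - aff.max(axis=1, keepdims=True))   # numerical stability
    aff /= aff.sum(axis=1, keepdims=True)
    # Each region aggregates context from every other region
    agg = desc @ aff.T                                   # (C, R)
    # Broadcast the aggregated context back to full resolution as a residual
    ctx = np.repeat(np.repeat(agg.reshape(C, region, region), rh, axis=1),
                    rw, axis=2)
    return x + ctx
```

Working at the region level keeps the affinity matrix at R × R rather than (H·W) × (H·W), which is what makes long-distance dependencies affordable on large remote sensing images.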


Publication history

Received: 13 January 2022
Revised: 05 April 2022
Accepted: 24 May 2022
Published: 13 December 2022
Issue date: June 2023

Copyright

© The author(s) 2023.

Acknowledgements

This work was supported by the Shanghai Aerospace Science and Technology Innovation Fund (No. SAST2019-048) and the Cross-Media Intelligent Technology Project of Beijing National Research Center for Information Science and Technology (BNRist) (No. BNR2019TD01022).

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
