Most existing image inpainting methods aim to fill in missing content within an explicitly specified hole region of the target image. In realistically degraded images, however, the areas to be restored are unspecified, and previous studies fail to recover such degradations because no explicit mask indicates them. Moreover, inconsistent patterns blend in complex ways with the surrounding image content. It is therefore necessary to estimate whether individual pixels are out of distribution and whether local content is consistent with its context. Motivated by these observations, a two-stage blind image inpainting network is proposed, which uses global semantic features of the image to locate semantically inconsistent regions and then generates plausible content for those regions. Specifically, the representation differences between inconsistent and available content are first amplified, and the region to be restored is predicted iteratively, from coarse to fine. A confidence-driven inpainting network then estimates the missing content based on the predicted masks. Furthermore, a multiscale contextual aggregation module performs spatial feature transfer to refine the generated content. Extensive experiments on multiple datasets demonstrate that the proposed method generates visually plausible and structurally complete results and is particularly effective at recovering diversely degraded images.
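To make the two-stage pipeline concrete, below is a minimal PyTorch-style sketch of the structure the abstract describes: an iterative coarse-to-fine mask predictor, a confidence-driven inpainter that trusts pixels outside the predicted mask, and a multiscale contextual aggregation step realized here with dilated convolutions. All module names (MaskPredictor, ConfidenceInpainter, ContextAggregator), layer sizes, and the number of refinement steps are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MaskPredictor(nn.Module):
    """Stage 1 (sketch): iteratively refine a soft mask of semantically inconsistent pixels."""
    def __init__(self, steps=3):
        super().__init__()
        self.steps = steps
        # 3 RGB channels + 1 channel carrying the current mask estimate
        self.refine = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, img):
        mask = torch.zeros(img.size(0), 1, *img.shape[2:], device=img.device)
        for _ in range(self.steps):  # coarse-to-fine: each pass refines the previous mask
            mask = self.refine(torch.cat([img, mask], dim=1))
        return mask

class ConfidenceInpainter(nn.Module):
    """Stage 2 (sketch): regenerate content inside the predicted mask, keep the rest."""
    def __init__(self):
        super().__init__()
        self.generator = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, img, mask):
        filled = self.generator(torch.cat([img * (1 - mask), mask], dim=1))
        # confidence-style blending: original pixels outside the mask are kept verbatim
        return img * (1 - mask) + filled * mask

class ContextAggregator(nn.Module):
    """Refinement (sketch): aggregate context at several scales via dilated convolutions."""
    def __init__(self, ch=3):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in (1, 2, 4)
        )
        self.fuse = nn.Conv2d(3 * ch, ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# Usage: no mask is supplied with the degraded input; stage 1 estimates it.
img = torch.rand(1, 3, 64, 64)            # degraded input image
mask = MaskPredictor()(img)               # stage 1: locate inconsistent regions
out = ConfidenceInpainter()(img, mask)    # stage 2: fill the located regions
out = ContextAggregator()(out)            # multiscale contextual refinement
```

The design choice worth noting is the blending in stage 2: because the mask is predicted rather than given, a soft confidence mask lets the network hedge on uncertain pixels instead of committing to a hard hole boundary.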
The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).