Volume 28, Issue 4

DEANet: Decomposition Enhancement and Adjustment Network for Low-Light Image Enhancement

Yonglong Jiang1, Liangliang Li2, Jiahe Zhu2, Yuan Xue1, Hongbing Ma2 (corresponding author)
1 College of Information Science and Engineering, Xinjiang University, Urumqi 830046, China
2 Department of Electronic Engineering, Tsinghua University, Beijing 100084, China

Abstract

Poor illumination greatly degrades the quality of captured images. In this paper, a novel convolutional neural network named DEANet is proposed, based on Retinex theory, for low-light image enhancement. DEANet combines the frequency and content information of images and consists of three subnetworks: a decomposition network, which performs image decomposition; an enhancement network, which performs denoising, contrast enhancement, and detail preservation; and an adjustment network, which performs image adjustment and generation. The model is trained on the public LOL dataset, and the experimental results show that it outperforms existing state-of-the-art methods in both visual effect and image quality.

Keywords: low-light image enhancement, Retinex, image decomposition, image adjustment
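The Retinex theory referenced in the abstract models an image as the pixel-wise product of reflectance (scene content) and illumination (lighting). As background, the following is a minimal NumPy sketch of a classical, non-learned decomposition in that spirit; the function name and the box-blur illumination estimate are illustrative assumptions, not the paper's learned decomposition subnetwork:

```python
import numpy as np

def decompose_retinex(image, ksize=5):
    """Split an image into (reflectance, illumination) per the Retinex model
    image = reflectance * illumination.

    Illumination is approximated by a local mean (box blur); reflectance is
    the per-pixel ratio. DEANet replaces this hand-crafted split with a
    learned decomposition network.
    """
    pad = ksize // 2
    padded = np.pad(image, pad, mode="edge")  # replicate borders so the blur is defined everywhere
    h, w = image.shape
    illumination = np.empty_like(image)
    for i in range(h):
        for j in range(w):
            # local mean over a ksize x ksize window centered at (i, j)
            illumination[i, j] = padded[i:i + ksize, j:j + ksize].mean()
    reflectance = image / (illumination + 1e-6)  # epsilon avoids division by zero
    return reflectance, illumination
```

By construction, multiplying the two components back together recovers the input almost exactly, which is the consistency property the decomposition subnetwork is also trained to satisfy.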

Electronic supplementary material: 743-753ESM.pdf (17.3 MB)
Publication history

Received: 21 May 2022
Revised: 10 August 2022
Accepted: 8 October 2022
Published: 6 January 2023
Issue date: August 2023

Copyright

© The author(s) 2023.

Acknowledgements

This work was supported by the Shanghai Aerospace Science and Technology Innovation Fund (No. SAST2019-048) and the Cross-Media Intelligent Technology Project of Beijing National Research Center for Information Science and Technology (BNRist) (No. BNR2019TD01022).

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).