References
[1] E. H. Land, The Retinex theory of color vision, Sci. Am., vol. 237, no. 6, pp. 108–128, 1977.
[2] D. J. Jobson, Z. Rahman, and G. A. Woodell, Properties and performance of a center/surround Retinex, IEEE Trans. Image Process., vol. 6, no. 3, pp. 451–462, 1997.
[3] D. J. Jobson, Z. Rahman, and G. A. Woodell, A multiscale Retinex for bridging the gap between color images and the human observation of scenes, IEEE Trans. Image Process., vol. 6, no. 7, pp. 965–976, 1997.
[4] Z. Ying, G. Li, and W. Gao, A bio-inspired multi-exposure fusion framework for low-light image enhancement, arXiv preprint arXiv:1711.00591, 2017.
[5] X. Guo, Y. Li, and H. Ling, LIME: Low-light image enhancement via illumination map estimation, IEEE Trans. Image Process., vol. 26, no. 2, pp. 982–993, 2017.
[6] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering, IEEE Trans. Image Process., vol. 16, no. 8, pp. 2080–2095, 2007.
[7] J. Zhong, B. Yang, G. Huang, F. Zhong, and Z. Chen, Remote sensing image fusion with convolutional neural network, Sens. Imag., vol. 17, no. 1, p. 10, 2016.
[8] Z. Zhu, Y. Luo, H. Wei, Y. Li, G. Qi, N. Mazur, Y. Li, and P. Li, Atmospheric light estimation based remote sensing image dehazing, Remote Sens., vol. 13, no. 13, p. 2432, 2021.
[9] Z. Zhu, H. Wei, G. Hu, Y. Li, G. Qi, and N. Mazur, A novel fast single image dehazing algorithm based on artificial multiexposure image fusion, IEEE Trans. Instrum. Meas., vol. 70, p. 5001523, 2021.
[10] K. G. Lore, A. Akintayo, and S. Sarkar, LLNet: A deep autoencoder approach to natural low-light image enhancement, Pattern Recognit., vol. 61, pp. 650–662, 2017.
[11] C. Wei, W. Wang, W. Yang, and J. Liu, Deep Retinex decomposition for low-light enhancement, presented at the British Machine Vision Conf., Newcastle, UK, 2018.
[12] F. Lv, F. Lu, J. Wu, and C. Lim, MBLLEN: Low-light image/video enhancement using CNNs, presented at the British Machine Vision Conf., Newcastle, UK, 2018.
[13] S. Lim and W. Kim, DSLR: Deep stacked Laplacian restorer for low-light image enhancement, IEEE Trans. Multimedia, vol. 23, pp. 4272–4284, 2021.
[14] W. Wang, C. Wei, W. Yang, and J. Liu, GLADNet: Low-light enhancement network with global awareness, in Proc. 13th IEEE Int. Conf. on Automatic Face & Gesture Recognition, Xi’an, China, 2018, pp. 751–755.
[15] Y. Zhang, X. Guo, J. Ma, W. Liu, and J. Zhang, Beyond brightening low-light images, Int. J. Comput. Vision, vol. 129, no. 4, pp. 1013–1037, 2021.
[16] Z. Zhao, B. Xiong, L. Wang, Q. Ou, L. Yu, and F. Kuang, RetinexDIP: A unified deep framework for low-light image enhancement, IEEE Trans. Circuits Syst. Video Technol., vol. 32, no. 3, pp. 1076–1088, 2022.
[17] W. Wang, Z. Chen, and X. Yuan, Simple low-light image enhancement based on Weber-Fechner law in logarithmic space, Signal Process.: Image Commun., vol. 106, p. 116742, 2022.
[18] D. Liang, L. Li, M. Wei, S. Yang, L. Zhang, W. Yang, Y. Du, and H. Zhou, Semantically contrastive learning for low-light image enhancement, in Proc. 36th AAAI Conf. on Artificial Intelligence, 2022, pp. 1555–1563.
[19] Z. Ying, G. Li, Y. Ren, R. Wang, and W. Wang, A new low-light image enhancement algorithm using camera response model, in Proc. 2017 IEEE Int. Conf. on Computer Vision Workshops, Venice, Italy, 2017, pp. 3015–3022.
[20] A. Zhu, L. Zhang, Y. Shen, Y. Ma, S. Zhao, and Y. Zhou, Zero-shot restoration of underexposed images via robust Retinex decomposition, in Proc. 2020 IEEE Int. Conf. on Multimedia and Expo, London, UK, 2020, pp. 1–6.
[21] C. Guo, C. Y. Li, J. Guo, C. C. Loy, J. Hou, S. Kwong, and R. Cong, Zero-reference deep curve estimation for low-light image enhancement, in Proc. 2020 IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Seattle, WA, USA, 2020, pp. 1777–1786.
[22] Y. Jiang, X. Gong, D. Liu, Y. Cheng, C. Fang, X. Shen, J. Yang, P. Zhou, and Z. Wang, EnlightenGAN: Deep light enhancement without paired supervision, IEEE Trans. Image Process., vol. 30, pp. 2340–2349, 2021.
[23] S. Wang, J. Zheng, H. Hu, and B. Li, Naturalness preserved enhancement algorithm for non-uniform illumination images, IEEE Trans. Image Process., vol. 22, no. 9, pp. 3538–3548, 2013.
[24] C. Lee, C. Lee, Y. Y. Lee, and C. Kim, Power-constrained contrast enhancement for emissive displays based on histogram equalization, IEEE Trans. Image Process., vol. 21, no. 1, pp. 80–93, 2012.
[25] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, Image quality assessment: From error visibility to structural similarity, IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, 2004.
[26] L. Zhang, L. Zhang, X. Mou, and D. Zhang, FSIM: A feature similarity index for image quality assessment, IEEE Trans. Image Process., vol. 20, no. 8, pp. 2378–2386, 2011.
[27] W. Xue, L. Zhang, X. Mou, and A. C. Bovik, Gradient magnitude similarity deviation: A highly efficient perceptual image quality index, IEEE Trans. Image Process., vol. 23, no. 2, pp. 684–695, 2014.
[28] A. Mittal, R. Soundararajan, and A. C. Bovik, Making a “completely blind” image quality analyzer, IEEE Signal Process. Lett., vol. 20, no. 3, pp. 209–212, 2013.