References
[1] M. Abdullah-Al-Wadud, M. H. Kabir, M. A. A. Dewan, and O. Chae, A dynamic histogram equalization for image contrast enhancement, IEEE Trans. Consum. Electron., vol. 53, no. 2, pp. 593-600, 2007.
[2] H. Ibrahim and N. S. P. Kong, Brightness preserving dynamic histogram equalization for image contrast enhancement, IEEE Trans. Consum. Electron., vol. 53, no. 4, pp. 1752-1758, 2007.
[3] C. Lee, C. Lee, and C. S. Kim, Contrast enhancement based on layered difference representation of 2D histograms, IEEE Trans. Image Process., vol. 22, no. 12, pp. 5372-5384, 2013.
[4] K. Nakai, Y. Hoshi, and A. Taguchi, Color image contrast enhancement method based on differential intensity/saturation gray-levels histograms, in Proc. of the 2013 Int. Symp. Intelligent Signal Processing and Communication Systems, Naha, Japan, 2013, pp. 445-449.
[5] S. M. Pizer, E. P. Amburn, J. D. Austin, R. Cromartie, A. Geselowitz, T. Greer, B. ter Haar Romeny, J. B. Zimmerman, and K. Zuiderveld, Adaptive histogram equalization and its variations, Comput. Vis. Graph. Image Process., vol. 39, no. 3, pp. 355-368, 1987.
[6] X. J. Guo, Y. Li, and H. B. Ling, LIME: Low-light image enhancement via illumination map estimation, IEEE Trans. Image Process., vol. 26, no. 2, pp. 982-993, 2017.
[7] D. J. Jobson, Z. Rahman, and G. A. Woodell, A multiscale retinex for bridging the gap between color images and the human observation of scenes, IEEE Trans. Image Process., vol. 6, no. 7, pp. 965-976, 1997.
[8] M. D. Li, J. Y. Liu, W. H. Yang, X. Y. Sun, and Z. M. Guo, Structure-revealing low-light image enhancement via robust retinex model, IEEE Trans. Image Process., vol. 27, no. 6, pp. 2828-2841, 2018.
[9] S. Park, S. Yu, B. Moon, S. Ko, and J. Paik, Low-light image enhancement using variational optimization-based retinex model, IEEE Trans. Consum. Electron., vol. 63, no. 2, pp. 178-184, 2017.
[10] X. T. Ren, W. H. Yang, W. H. Cheng, and J. Y. Liu, LR3M: Robust low-light enhancement via low-rank regularized retinex model, IEEE Trans. Image Process., vol. 29, pp. 5862-5876, 2020.
[11] S. H. Wang, J. Zheng, H. M. Hu, and B. Li, Naturalness preserved enhancement algorithm for non-uniform illumination images, IEEE Trans. Image Process., vol. 22, no. 9, pp. 3538-3548, 2013.
[12] Z. Q. Ying, G. Li, and W. Gao, A bio-inspired multi-exposure fusion framework for low-light image enhancement, arXiv preprint arXiv:1711.00591, 2017.
[13] Y. S. Chen, Y. C. Wang, M. H. Kao, and Y. Y. Chuang, Deep photo enhancer: Unpaired learning for image enhancement from photographs with GANs, in Proc. of the 2018 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 6306-6314.
[14] A. Ignatov, N. Kobyshev, R. Timofte, K. Vanhoey, and L. Van Gool, DSLR-quality photos on mobile devices with deep convolutional networks, in Proc. of the 2017 IEEE Int. Conf. Computer Vision, Venice, Italy, 2017, pp. 3297-3305.
[15] A. Ignatov, N. Kobyshev, R. Timofte, K. Vanhoey, and L. Van Gool, WESPE: Weakly supervised photo enhancer for digital cameras, in Proc. of the 2018 IEEE/CVF Conf. Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 2018, pp. 691-700.
[16] R. X. Wang, Q. Zhang, C. W. Fu, X. Y. Shen, W. S. Zheng, and J. Y. Jia, Underexposed photo enhancement using deep illumination estimation, in Proc. of the 2019 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Long Beach, CA, USA, 2019, pp. 6842-6850.
[17] K. G. Lore, A. Akintayo, and S. Sarkar, LLNet: A deep autoencoder approach to natural low-light image enhancement, Pattern Recogn., vol. 61, pp. 650-662, 2017.
[18] K. Lu and L. H. Zhang, TBEFN: A two-branch exposure-fusion network for low-light image enhancement, IEEE Trans. Multimed., vol. 23, pp. 4093-4105, 2021.
[19] F. F. Lv, Y. Li, and F. Lu, Attention-guided low-light image enhancement, arXiv preprint arXiv:1908.00682, 2019.
[20] F. F. Lv, F. Lu, J. H. Wu, and C. Lim, MBLLEN: Low-light image/video enhancement using CNNs, in Proc. of the 2018 British Machine Vision Conf. (BMVC), Northumbria, UK, 2018, p. 220.
[21] W. Q. Ren, S. F. Liu, L. Ma, Q. Q. Xu, X. Y. Xu, X. C. Cao, J. P. Du, and M. H. Yang, Low-light image enhancement via a deep hybrid network, IEEE Trans. Image Process., vol. 28, no. 9, pp. 4364-4375, 2019.
[22] Y. Wang, Y. Cao, Z. J. Zha, J. Zhang, Z. W. Xiong, W. Zhang, and F. Wu, Progressive retinex: Mutually reinforced illumination-noise perception network for low-light image enhancement, in Proc. 27th ACM Int. Conf. Multimedia, Nice, France, 2019, pp. 2015-2023.
[23] C. Wei, W. J. Wang, W. H. Yang, and J. Y. Liu, Deep retinex decomposition for low-light enhancement, arXiv preprint arXiv:1808.04560, 2018.
[24] K. Xu, X. Yang, B. C. Yin, and R. W. H. Lau, Learning to restore low-light images via decomposition-and-enhancement, in Proc. of the 2020 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Seattle, WA, USA, 2020, pp. 2278-2287.
[25] Y. H. Zhang, J. W. Zhang, and X. J. Guo, Kindling the darkness: A practical low-light image enhancer, in Proc. 27th ACM Int. Conf. Multimedia, Nice, France, 2019, pp. 1632-1640.
[26] M. F. Zhu, P. B. Pan, W. Chen, and Y. Yang, EEMEFN: Low-light image enhancement via edge-enhanced multi-exposure fusion network, Proc. AAAI Conf. Artif. Intell., vol. 34, no. 7, pp. 13106-13113, 2020.
[27] Y. F. Jiang, X. Y. Gong, D. Liu, Y. Cheng, C. Fang, X. H. Shen, J. C. Yang, P. Zhou, and Z. Y. Wang, EnlightenGAN: Deep light enhancement without paired supervision, IEEE Trans. Image Process., vol. 30, pp. 2340-2349, 2021.
[28] W. Xiong, D. Liu, X. H. Shen, C. Fang, and J. B. Luo, Unsupervised real-world low-light image enhancement with decoupled networks, arXiv preprint arXiv:2005.02818, 2020.
[29] S. Anwar and N. Barnes, Real image denoising with feature attention, in Proc. of the 2019 IEEE/CVF Int. Conf. Computer Vision, Seoul, Republic of Korea, 2019, pp. 3155-3164.
[30] J. Hu, L. Shen, and G. Sun, Squeeze-and-excitation networks, in Proc. of the 2018 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 7132-7141.
[31] X. L. Wang, R. Girshick, A. Gupta, and K. M. He, Non-local neural networks, in Proc. of the 2018 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 7794-7803.
[32] S. Woo, J. Park, J. Y. Lee, and I. S. Kweon, CBAM: Convolutional block attention module, in Proc. 15th European Conf. Computer Vision, Munich, Germany, 2018, pp. 3-19.
[33] J. Y. Zhu, T. Park, P. Isola, and A. A. Efros, Unpaired image-to-image translation using cycle-consistent adversarial networks, in Proc. of the 2017 IEEE Int. Conf. Computer Vision, Venice, Italy, 2017, pp. 2242-2251.
[34] M. H. Fan, W. J. Wang, W. H. Yang, and J. Y. Liu, Integrating semantic segmentation and retinex model for low-light image enhancement, in Proc. 28th ACM Int. Conf. Multimedia, Seattle, WA, USA, 2020, pp. 2317-2325.
[35] K. C. K. Chan, X. T. Wang, X. Y. Xu, J. W. Gu, and C. C. Loy, GLEAN: Generative latent bank for large-factor image super-resolution, in Proc. of the 2021 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Nashville, TN, USA, 2021, pp. 14240-14249.
[36] X. T. Wang, K. Yu, C. Dong, and C. C. Loy, Recovering realistic texture in image super-resolution by deep spatial feature transform, in Proc. of the 2018 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 606-615.
[37] X. M. Li, C. F. Chen, S. C. Zhou, X. H. Lin, W. M. Zuo, and L. Zhang, Blind face restoration via deep multi-scale component dictionaries, in Proc. 16th European Conf. Computer Vision, Glasgow, UK, 2020, pp. 399-415.
[38] X. Lin, Z. J. Wang, L. Z. Ma, and X. B. Wu, Saliency detection via multi-scale global cues, IEEE Trans. Multimed., vol. 21, no. 7, pp. 1646-1659, 2019.
[39] Z. J. Wang, L. Z. Ma, X. Lin, and H. Zhong, Saliency detection via multi-center convex hull prior, in Proc. of the 2018 IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), Calgary, Canada, 2018, pp. 1867-1871.
[40] Z. X. Wang, Z. Quan, Z. J. Wang, X. J. Hu, and Y. Y. Chen, Text to image synthesis with bidirectional generative adversarial network, in Proc. of the 2020 IEEE Int. Conf. Multimedia and Expo (ICME), London, UK, 2020, pp. 1-6.
[41] W. Liu, Z. J. Wang, B. Yao, and J. Yin, Geo-ALM: POI recommendation by fusing geographical information and adversarial learning mechanism, in Proc. 28th Int. Joint Conf. Artificial Intelligence, Macao, China, 2019, pp. 1807-1813.
[42] X. Lin, L. Z. Ma, B. Sheng, Z. J. Wang, and W. S. Chen, Utilizing two-phase processing with FBLS for single image deraining, IEEE Trans. Multimed., vol. 23, pp. 664-676, 2021.
[43] V. Mnih, N. Heess, A. Graves, and K. Kavukcuoglu, Recurrent models of visual attention, arXiv preprint arXiv:1406.6247, 2014.
[44] Q. L. Wang, B. G. Wu, P. F. Zhu, P. H. Li, W. M. Zuo, and Q. H. Hu, ECA-Net: Efficient channel attention for deep convolutional neural networks, in Proc. of the 2020 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Seattle, WA, USA, 2020, pp. 11531-11539.
[45] W. Z. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. H. Wang, Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network, in Proc. of the 2016 IEEE Conf. Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016, pp. 1874-1883.
[46] X. D. Mao, Q. Li, H. R. Xie, R. Y. K. Lau, Z. Wang, and S. P. Smolley, Least squares generative adversarial networks, in Proc. of the 2017 IEEE Int. Conf. Computer Vision, Venice, Italy, 2017, pp. 2813-2821.