References
[1] X. Lu, W. Wang, J. Shen, Y. W. Tai, D. J. Crandall, and S. C. H. Hoi, Learning video object segmentation from unlabeled videos, in Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Seattle, WA, USA, 2020, pp. 8957–8967.
[2] J. Guo, X. Zhu, C. Zhao, D. Cao, Z. Lei, and S. Z. Li, Learning meta face recognition in unseen domains, in Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Seattle, WA, USA, 2020, pp. 6162–6171.
[3] M. Danelljan, L. Van Gool, and R. Timofte, Probabilistic regression for visual tracking, in Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Seattle, WA, USA, 2020, pp. 7181–7190.
[4] D. J. Jobson, Z. Rahman, and G. A. Woodell, Properties and performance of a center/surround retinex, IEEE Trans. Image Process., vol. 6, no. 3, pp. 451–462, 1997.
[5] Z. Rahman, D. J. Jobson, and G. A. Woodell, Multi-scale retinex for color image enhancement, in Proc. 3rd IEEE Int. Conf. on Image Processing, Lausanne, Switzerland, 1996, pp. 1003–1006.
[6] D. J. Jobson, Z. Rahman, and G. A. Woodell, A multiscale retinex for bridging the gap between color images and the human observation of scenes, IEEE Trans. Image Process., vol. 6, no. 7, pp. 965–976, 1997.
[7] M. Li, J. Liu, W. Yang, X. Sun, and Z. Guo, Structure-revealing low-light image enhancement via robust retinex model, IEEE Trans. Image Process., vol. 27, no. 6, pp. 2828–2841, 2018.
[8] X. Guo, Y. Li, and H. Ling, LIME: Low-light image enhancement via illumination map estimation, IEEE Trans. Image Process., vol. 26, no. 2, pp. 982–993, 2017.
[9] Y. Jiang, X. Gong, D. Liu, Y. Cheng, C. Fang, X. Shen, J. Yang, P. Zhou, and Z. Wang, EnlightenGAN: Deep light enhancement without paired supervision, IEEE Trans. Image Process., vol. 30, pp. 2340–2349, 2021.
[10] C. Li, C. Guo, and C. C. Loy, Learning to enhance low-light image via zero-reference deep curve estimation, IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, no. 8, pp. 4225–4238, 2022.
[11] C. Wei, W. Wang, W. Yang, and J. Liu, Deep retinex decomposition for low-light enhancement, in Proc. British Machine Vision Conf., Newcastle, UK, 2018, p. 155.
[12] V. Bychkovsky, S. Paris, E. Chan, and F. Durand, Learning photographic global tonal adjustment with a database of input/output image pairs, in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 2011, pp. 97–104.
[13] X. Fu, Y. Liao, D. Zeng, Y. Huang, X. P. Zhang, and X. Ding, A probabilistic method for image enhancement with simultaneous illumination and reflectance estimation, IEEE Trans. Image Process., vol. 24, no. 12, pp. 4965–4977, 2015.
[14] B. L. Cai, X. Xu, K. Guo, K. Jia, B. Hu, and D. Tao, A joint intrinsic-extrinsic prior model for retinex, in Proc. IEEE Int. Conf. on Computer Vision, Venice, Italy, 2017, pp. 4020–4029.
[15] R. Wang, Q. Zhang, C. W. Fu, X. Shen, W. Zheng, and J. Jia, Underexposed photo enhancement using deep illumination estimation, in Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 2019, pp. 6842–6850.
[16] L. Ma, R. Liu, J. Zhang, X. Fan, and Z. Luo, Learning deep context-sensitive decomposition for low-light image enhancement, IEEE Trans. Neural Netw. Learn. Syst.
[17] R. Liu, L. Ma, Y. Zhang, X. Fan, and Z. Luo, Underexposed image correction via hybrid priors navigated deep propagation, IEEE Trans. Neural Netw. Learn. Syst.
[18] X. Fu, D. Zeng, Y. Huang, X. P. Zhang, and X. Ding, A weighted variational model for simultaneous reflectance and illumination estimation, in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016, pp. 2782–2790.
[19] X. Mao, Q. Li, H. Xie, R. Y. K. Lau, Z. Wang, and S. P. Smolley, Least squares generative adversarial networks, in Proc. IEEE Int. Conf. on Computer Vision, Venice, Italy, 2017, pp. 2813–2821.
[20] W. Yang, S. Wang, Y. Fang, Y. Wang, and J. Liu, From fidelity to perceptual quality: A semi-supervised approach for low-light image enhancement, in Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Seattle, WA, USA, 2020, pp. 3060–3069.
[21] K. Xu, X. Yang, B. Yin, and R. W. H. Lau, Learning to restore low-light images via decomposition-and-enhancement, in Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Seattle, WA, USA, 2020, pp. 2278–2287.
[22] R. Li, R. T. Tan, and L. F. Cheong, All in one bad weather removal using architectural search, in Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Seattle, WA, USA, 2020, pp. 3172–3182.
[23] H. Zhang, Y. Li, H. Chen, and C. Shen, Memory-efficient hierarchical neural architecture search for image denoising, in Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Seattle, WA, USA, 2020, pp. 3654–3663.
[24] M. Możejko, T. Latkowski, Ł. Treszczotko, M. Szafraniuk, and K. Trojanowski, Superkernel neural architecture search for image denoising, in Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 2020, pp. 2002–2011.
[25] R. Liu, L. Ma, J. Zhang, X. Fan, and Z. Luo, Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement, in Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Nashville, TN, USA, 2021, pp. 10556–10565.
[26] J. Liu, J. Tang, and G. Wu, Residual feature distillation network for lightweight image super-resolution, in Proc. European Conf. on Computer Vision, Glasgow, UK, 2020, pp. 41–55.
[27] H. Liu, K. Simonyan, and Y. Yang, DARTS: Differentiable architecture search, presented at the 7th Int. Conf. on Learning Representations, New Orleans, LA, USA, 2019.
[28] Y. Xu, L. Xie, X. Zhang, X. Chen, G. J. Qi, Q. Tian, and H. Xiong, PC-DARTS: Partial channel connections for memory-efficient architecture search, presented at the 8th Int. Conf. on Learning Representations, Addis Ababa, Ethiopia, 2020.
[29] H. Liang, S. Zhang, J. Sun, X. He, W. Huang, K. Zhuang, and Z. Li, DARTS+: Improved differentiable architecture search with early stopping, arXiv preprint arXiv:1909.06035, 2019.
[30] Y. Hu, X. Wu, and R. He, TF-NAS: Rethinking three search freedoms of latency-constrained differentiable neural architecture search, in Proc. 16th European Conf. on Computer Vision, Glasgow, UK, 2020, pp. 123–139.
[31] J. Johnson, A. Alahi, and L. Fei-Fei, Perceptual losses for real-time style transfer and super-resolution, in Proc. 14th European Conf. on Computer Vision, Amsterdam, The Netherlands, 2016, pp. 694–711.
[32] J. Guo, K. Han, Y. Wang, C. Zhang, Z. Yang, H. Wu, X. Chen, and C. Xu, Hit-Detector: Hierarchical trinity architecture search for object detection, in Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Seattle, WA, USA, 2020, pp. 11402–11411.
[33] H. Zhang, Y. Li, H. Chen, and C. Shen, Memory-efficient hierarchical neural architecture search for image denoising, in Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Seattle, WA, USA, 2020, pp. 3654–3663.
[34] Y. Zhang, J. Zhang, and X. Guo, Kindling the darkness: A practical low-light image enhancer, in Proc. 27th ACM Int. Conf. on Multimedia, Nice, France, 2019, pp. 1632–1640.
[35] L. C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, Encoder-decoder with atrous separable convolution for semantic image segmentation, in Proc. 15th European Conf. on Computer Vision, Munich, Germany, 2018, pp. 833–851.
[36] C. Sakaridis, D. Dai, and L. Van Gool, ACDC: The adverse conditions dataset with correspondences for semantic driving scene understanding, in Proc. IEEE/CVF Int. Conf. on Computer Vision, Montreal, Canada, 2021, pp. 10745–10755.