References
[1]
D. Bahdanau, K. Cho, and Y. Bengio, Neural machine translation by jointly learning to align and translate, presented at the 2015 International Conference on Learning Representations, San Diego, CA, USA, 2015.
[2]
K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio, Show, attend and tell: Neural image caption generation with visual attention, in Proc. of the 32nd International Conference on Machine Learning, Lille, France, 2015, pp. 2048–2057.
[3]
J. Lu, J. Yang, D. Batra, and D. Parikh, Hierarchical question-image co-attention for visual question answering, in Proc. of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 2016, pp. 289–297.
[4]
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, Gradient-based learning applied to document recognition, Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[5]
J. Hu, L. Shen, S. Albanie, G. Sun, and E. Wu, Squeeze-and-excitation networks, in Proc. of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 7132–7141.
[6]
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, Attention is all you need, in Proc. of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 2017, pp. 6000–6010.
[7]
F. Wang, M. Jiang, C. Qian, S. Yang, C. Li, H. Zhang, X. Wang, and X. Tang, Residual attention network for image classification, in Proc. of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 2017, pp. 3156–3164.
[8]
S. Woo, J. Park, J. Y. Lee, and I. S. Kweon, CBAM: Convolutional block attention module, in Proc. of the 15th European Conference on Computer Vision, Munich, Germany, 2018, pp. 3–19.
[9]
X. Wang, R. Girshick, A. Gupta, and K. He, Non-local neural networks, in Proc. of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 7794–7803.
[10]
J. Fu, J. Liu, H. Tian, Y. Li, Y. Bao, Z. Fang, and H. Lu, Dual attention network for scene segmentation, in Proc. of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 2019, pp. 3146–3154.
[11]
M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu, Spatial transformer networks, in Proc. of the 28th International Conference on Neural Information Processing Systems, Montreal, Canada, 2015, pp. 2017–2025.
[12]
W. T. Chan, F. Y. L. Chin, D. Ye, G. Zhang, and Y. Zhang, On-line scheduling of parallel jobs on two machines, Journal of Discrete Algorithms, vol. 6, no. 1, pp. 3–10, 2008.
[13]
R. Xin, J. Zhang, and Y. Shao, Complex network classification with convolutional neural network, Tsinghua Science and Technology, vol. 25, no. 4, pp. 447–457, 2020.
[14]
W. T. Chan, Y. Zhang, S. P. Y. Fung, D. Ye, and H. Zhu, Efficient algorithms for finding a longest common increasing subsequence, Journal of Combinatorial Optimization, vol. 13, no. 3, pp. 277–288, 2007.
[15]
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, Generative adversarial nets, Advances in Neural Information Processing Systems, vol. 27, pp. 2672–2680, 2014.
[16]
P. Isola, J. Y. Zhu, T. Zhou, and A. A. Efros, Image-to-image translation with conditional adversarial networks, in Proc. of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 2017, pp. 1125–1134.
[17]
J. Y. Zhu, R. Zhang, D. Pathak, T. Darrell, A. A. Efros, O. Wang, and E. Shechtman, Multimodal image-to-image translation by enforcing bi-cycle consistency, in Proc. of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 2017, pp. 465–476.
[18]
T. Wang, M. Liu, J. Zhu, A. Tao, J. Kautz, and B. Catanzaro, High-resolution image synthesis and semantic manipulation with conditional GANs, in Proc. of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 8798–8807.
[19]
X. Liang, H. Zhang, L. Lin, and E. Xing, Generative semantic manipulation with mask-contrasting GAN, in Proc. of the 15th European Conference on Computer Vision, Munich, Germany, 2018, pp. 574–590.
[20]
X. Chen, C. Xu, X. Yang, and D. Tao, Attention-gan for object transfiguration in wild images, in Proc. of the 15th European Conference on Computer Vision, Munich, Germany, 2018, pp. 167–184.
[21]
Y. A. Mejjati, C. Richardt, J. Tompkin, D. Cosker, and K. I. Kim, Unsupervised attention-guided image to image translation, in Proc. of the 32nd International Conference on Neural Information Processing Systems, Montreal, Canada, 2018, pp. 3697–3707.
[22]
J. Y. Zhu, T. Park, P. Isola, and A. A. Efros, Unpaired image-to-image translation using cycle-consistent adversarial networks, in Proc. of the 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017, pp. 2223–2232.
[23]
M. Mirza and S. Osindero, Conditional generative adversarial nets, arXiv preprint arXiv:1411.1784, 2014.
[24]
E. Shelhamer, J. Long, and T. Darrell, Fully convolutional networks for semantic segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 4, pp. 640–651, 2017.
[25]
M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter, GANs trained by a two time-scale update rule converge to a local Nash equilibrium, in Proc. of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 2017, pp. 6629–6640.
[26]
M. Bińkowski, D. J. Sutherland, M. Arbel, and A. Gretton, Demystifying MMD GANs, presented at the 2018 International Conference on Learning Representations, Vancouver, Canada, 2018.
[27]
A. Krizhevsky, Learning multiple layers of features from tiny images, Technical report, University of Toronto, Toronto, Canada, 2009.
[28]
K. Simonyan and A. Zisserman, Very deep convolutional networks for large-scale image recognition, arXiv preprint arXiv:1409.1556, 2014.
[29]
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, Going deeper with convolutions, in Proc. of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 2015, pp. 1–9.
[30]
K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition, in Proc. of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016, pp. 770–778.
[31]
G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, Densely connected convolutional networks, in Proc. of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 2017, pp. 4700–4708.
[32]
C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, Rethinking the inception architecture for computer vision, in Proc. of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016, pp. 2818–2826.
[33]
S. Ioffe and C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift, in Proc. of the 32nd International Conference on Machine Learning, Lille, France, 2015, pp. 448–456.
[34]
Z. Yi, H. Zhang, P. Tan, and M. Gong, DualGAN: Unsupervised dual learning for image-to-image translation, in Proc. of the 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017, pp. 2849–2857.
[35]
T. Kim, M. Cha, H. Kim, J. K. Lee, and J. Kim, Learning to discover cross-domain relations with generative adversarial networks, in Proc. of the 34th International Conference on Machine Learning, Sydney, Australia, 2017, pp. 1857–1865.
[37]
B. Hui, Y. Liu, J. Qiu, L. Cao, L. Ji, and Z. He, Study of texture segmentation and classification for grading small hepatocellular carcinoma based on CT images, Tsinghua Science and Technology, vol. 26, no. 2, pp. 199–207, 2021.