
Residual Convolutional Graph Neural Network with Subgraph Attention Pooling

Yutai Duan, Jianming Wang, Haoran Ma, and Yukuan Sun
Information and Communication Engineering Department, Tiangong University, Tianjin 300387, China
Computer Science Department, Tiangong University, Tianjin 300387, China
Software Engineering Department, Tiangong University, Tianjin 300387, China
Center for Engineering Internship and Training, Tiangong University, Tianjin 300387, China

Abstract

The pooling operation is used in graph classification tasks to leverage the hierarchical structures preserved in the data and to reduce computational complexity. However, pooling shrinks the graph and discards details, so existing pooling methods may lose features that are key to classification. In this work, we propose a residual convolutional graph neural network to tackle this loss of key classification features. Our contributions are threefold: (1) Unlike existing methods, we propose a new strategy for calculating sorting values and verify their importance for graph classification. Our strategy uses not only the features of a node itself but also those of its neighbors to evaluate the node's importance accurately. (2) We design a new graph convolutional layer architecture with a residual connection. By feeding discarded features back into the network, we reduce the probability of losing features that are critical for classification. (3) We propose a new method for building the graph-level representation. Messages are aggregated for each node separately; each node is then assigned its own attention weight, and the weighted nodes are merged into a graph-level representation that retains the structural and critical information needed for classification. Experimental results show that our method achieves state-of-the-art performance on multiple graph classification benchmarks.
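
To make the three contributions concrete, the sketch below illustrates the general idea in PyTorch with PyTorch Geometric. It is a minimal, hypothetical illustration written for this page, not the authors' implementation: the class name, the layer choices, and the pooling ratio are all assumptions, and it operates on a single graph for simplicity.

import torch
from torch_geometric.nn import GCNConv

class ResidualAttentionPool(torch.nn.Module):
    # Hypothetical sketch of the abstract's three ideas; NOT the authors' code.
    def __init__(self, in_dim, ratio=0.5):
        super().__init__()
        self.ratio = ratio
        # (1) Score nodes with a GCN layer, so each sorting value depends on
        # the node's own features AND those of its neighbors.
        self.score_conv = GCNConv(in_dim, 1)
        # (3) Per-node attention weights for the graph-level readout.
        self.att = torch.nn.Linear(in_dim, 1)

    def forward(self, x, edge_index):
        score = self.score_conv(x, edge_index).squeeze(-1)        # [N]
        k = max(1, int(self.ratio * x.size(0)))
        keep = score.topk(k).indices
        mask = torch.zeros(x.size(0), dtype=torch.bool, device=x.device)
        mask[keep] = True
        # (2) Residual feedback: aggregate the pruned nodes' features and
        # add them back, instead of discarding them outright.
        residual = x[~mask].sum(dim=0)
        x_kept = x[keep] * torch.tanh(score[keep]).unsqueeze(-1) + residual
        # (3) Attention-weighted sum yields one graph-level vector.
        alpha = torch.softmax(self.att(x_kept).squeeze(-1), dim=0)
        return (alpha.unsqueeze(-1) * x_kept).sum(dim=0)          # [in_dim]

For instance, pool = ResidualAttentionPool(16) applied to a graph with node features of shape [N, 16] returns one 16-dimensional vector that a downstream classifier can consume. A batched implementation would restrict the top-k selection and the softmax to nodes belonging to the same graph.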

Keywords: graph neural network, graph pooling, information loss


Publication history

Received: 27 April 2021
Revised: 8 July 2021
Accepted: 30 July 2021
Published: 9 December 2021
Issue date: August 2022

Copyright

© The author(s) 2022

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 62072335) and the Tianjin Science and Technology Program (No. 19PTZWHZ00020).

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
