Open Access

Capturing Global Structural Features and Global Temporal Dependencies in Dynamic Social Networks Using Graph Convolutional Networks for Enhanced Analysis

Fujian Key Laboratory of Network Computing and Intelligent Information Processing, College of Computer and Data Science, Fuzhou University, Fuzhou 350108, China
School of Business, Xianda College of Economics and Humanities, Shanghai International Studies University, Shanghai 202162, China

Abstract

Modeling and analyzing complex social networks is an important topic in social computing. Graph convolutional networks (GCNs) are widely used for learning social network embeddings and for social network analysis. However, real-world complex social networks, such as Facebook and Math, exhibit significant global structural and dynamic characteristics that conventional GCN models do not adequately capture. To address these issues, this paper proposes a novel graph convolutional network that considers global structural features and global temporal dependencies (GSTGCN). Specifically, we design a graph coarsening strategy based on the importance of social membership to construct a dynamic diffusion process over graphs. This diffusion process can be viewed as using higher-order subgraph embeddings to guide the generation of lower-order subgraph embeddings, and we model it with a gated recurrent unit (GRU) to extract both comprehensive global structural features of the graph and the evolutionary relationships among subgraphs. Furthermore, we design a new evolutionary strategy that incorporates a temporal self-attention mechanism to enhance the GRU's extraction of global temporal dependencies in dynamic networks. GSTGCN outperforms current state-of-the-art network embedding methods on important social network tasks such as link prediction and financial fraud identification.
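The coarse-to-fine diffusion described above can be sketched as a GRU pass over a sequence of subgraph embeddings, where each coarser level's summary guides the hidden state used at the next, finer level. The following is a minimal illustrative sketch, not the paper's implementation: the embedding dimension, weight initialization, and placeholder embeddings are all assumptions.

```python
import numpy as np

# Illustrative sketch (assumed, not the paper's code): a GRU cell reads
# subgraph embeddings from the coarsest subgraph down to the original
# graph, so higher-order (coarser) embeddings guide lower-order ones.

rng = np.random.default_rng(0)
d = 8  # embedding dimension (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Standard GRU cell (Cho et al.), randomly initialized."""
    def __init__(self, d):
        self.Wz = rng.standard_normal((d, 2 * d)) * 0.1  # update gate
        self.Wr = rng.standard_normal((d, 2 * d)) * 0.1  # reset gate
        self.Wh = rng.standard_normal((d, 2 * d)) * 0.1  # candidate state

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                         # update gate
        r = sigmoid(self.Wr @ xh)                         # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde

# Placeholder embeddings of subgraphs G_K (coarsest) ... G_0 (original);
# in practice these would come from a GCN at each coarsening level.
levels = [rng.standard_normal(d) for _ in range(4)]

cell = GRUCell(d)
h = np.zeros(d)
for emb in levels:            # coarse-to-fine pass
    h = cell.step(emb, h)     # each coarser level guides the next state

# h now aggregates information across all coarsening levels
print(h.shape)
```

In the full model, the final hidden state would serve as a global structural summary that complements the per-snapshot GCN embeddings; the temporal self-attention component would then weight such states across time steps.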

Journal of Social Computing
Pages 126-144
Cite this article:
Wu L, Li B, Guo K, et al. Capturing Global Structural Features and Global Temporal Dependencies in Dynamic Social Networks Using Graph Convolutional Networks for Enhanced Analysis. Journal of Social Computing, 2025, 6(2): 126-144. https://doi.org/10.23919/JSC.2025.0008

Received: 17 December 2024
Revised: 29 April 2025
Accepted: 18 May 2025
Published: 30 June 2025
© The author(s) 2025.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
