Open Access

Fake News Detection: Extendable to Global Heterogeneous Graph Attention Network with External Knowledge

School of Future Technology, Xinjiang University, Urumqi 830017, China
Department of Electronic Engineering, Tsinghua University, Beijing 100084, China

Abstract

Distinguishing genuine news from false information is crucial in today's digital era. Most existing methods are based either on traditional neural sequence models or on graph neural network models, which have become increasingly popular in recent years. Of these two, the latter solves the former's problem of neglecting the correlations among news sentences. However, a single graph neural network layer considers only the nodes directly connected to the current node and omits the important information carried by distant nodes. This study therefore proposes the Extendable-to-Global Heterogeneous Graph Attention network (EGHGAT), which handles heterogeneous graphs by extending local attention to global attention, addressing the drawback that local attention can collect information only from directly connected nodes. The shortest-distance matrix is computed among all nodes on the graph. Specifically, this shortest-distance information enables the current node to aggregate information from more distant nodes while accounting for the influence of different node types on the current node at the current network layer. This mechanism weighs the importance of directly and indirectly connected nodes as well as the effect of different node types on the current node, which substantially enhances model performance. In addition, information from an external knowledge base is used to compare each contextual entity representation with the corresponding knowledge-base entity representation, capturing its consistency with the news content. Experimental results on the benchmark dataset show that the proposed model significantly outperforms the state of the art. Our code is publicly available at https://github.com/gyhhk/EGHGAT_FakeNewsDetection.
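
To make the distance mechanism concrete, the following is a minimal PyTorch sketch based only on the description above, not on the released repository: it computes an all-pairs shortest-distance matrix and adds a learnable distance bias and a learnable node-type-pair bias to global attention logits. The names (DistanceBiasedAttention, shortest_distance_matrix, dist_bias, type_bias) and all design details are our assumptions, not the authors' implementation.

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.sparse.csgraph import shortest_path


def shortest_distance_matrix(adj, max_dist):
    # All-pairs shortest distances; unreachable pairs are clamped to max_dist.
    sp = shortest_path(adj, unweighted=True)      # float matrix, inf if unreachable
    sp = torch.from_numpy(sp)
    sp[torch.isinf(sp)] = max_dist
    return sp.clamp(max=max_dist).long()          # (N, N) integer distances


class DistanceBiasedAttention(nn.Module):
    # One global-attention layer over a heterogeneous graph. Attention logits
    # combine (a) content similarity, (b) a learnable bias indexed by
    # shortest-path distance, and (c) a learnable bias per
    # (target-type, source-type) pair, so distant and differently typed nodes
    # can still pass information to the current node.
    def __init__(self, dim, num_node_types, max_dist):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.dist_bias = nn.Embedding(max_dist + 1, 1)   # one scalar per distance
        self.type_bias = nn.Parameter(torch.zeros(num_node_types, num_node_types))
        self.scale = dim ** -0.5

    def forward(self, x, node_types, sp_dist):
        # x: (N, dim) node features; node_types: (N,) long; sp_dist: (N, N) long
        logits = (self.q(x) @ self.k(x).T) * self.scale              # content term
        logits = logits + self.dist_bias(sp_dist).squeeze(-1)        # distance term
        logits = logits + self.type_bias[node_types][:, node_types]  # node-type term
        return F.softmax(logits, dim=-1) @ self.v(x)                 # (N, dim) output


# Toy usage: a 3-node path graph with one node of each (hypothetical) type.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
dist = shortest_distance_matrix(adj, max_dist=5)
layer = DistanceBiasedAttention(dim=16, num_node_types=3, max_dist=5)
out = layer(torch.randn(3, 16), torch.tensor([0, 1, 2]), dist)

Clamping unreachable pairs to max_dist keeps the bias table finite while still letting the model learn to down-weight disconnected nodes; the entity-consistency comparison against the external knowledge base would sit in a separate module and is not sketched here.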

Cite this article:
Guo Y, Qiao L, Yang Z, et al. Fake News Detection: Extendable to Global Heterogeneous Graph Attention Network with External Knowledge. Tsinghua Science and Technology, 2025, 30(3): 1125-1138. https://doi.org/10.26599/TST.2023.9010104


Received: 29 May 2023
Revised: 29 August 2023
Accepted: 25 September 2023
Published: 30 December 2024
© The Author(s) 2025.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
