[1]
S. Ghosh, P. Singhania, S. Singh, K. Rudra, and S. Ghosh, Stance detection in web and social media: A comparative study, in Experimental IR Meets Multilinguality, Multimodality, and Interaction, F. Crestani, M. Braschler, J. Savoy, A. Rauber, H. Müller, D. E. Losada, G. H. Bürki, L. Cappellato, and N. Ferro, eds. Cham, Switzerland: Springer, 2019, pp. 75–87.
[2]
A. Sen, M. Sinha, S. Mannarswamy, and S. Roy, Stance classification of multi-perspective consumer health information, in Proc. ACM India Joint Int. Conf. Data Science and Management of Data, Goa, India, 2018, pp. 273–281.
[3]
K. Kawintiranon and L. Singh, Knowledge enhanced masked language model for stance detection, in Proc. 2021 Conf. North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Virtual Event, 2021, pp. 4725–4735.
[4]
Z. He, N. Mokhberian, and K. Lerman, Infusing knowledge from Wikipedia to enhance stance detection, in Proc. 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, Dublin, Ireland, 2022, pp. 71–77.
[5]
R. Liu, Z. Lin, Y. Tan, and W. Wang, Enhancing zero-shot and few-shot stance detection with commonsense knowledge graph, in Proc. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, Virtual Event, 2021, pp. 3152–3157.
[6]
O. Agarwal, H. Ge, S. Shakeri, and R. Al-Rfou, Knowledge graph based synthetic corpus generation for knowledge-enhanced language model pre-training, in Proc. 2021 Conf. North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Virtual Event, 2021, pp. 3554–3565.
[7]
Y. Xu, C. Zhu, S. Wang, S. Sun, H. Cheng, X. Liu, J. Gao, P. He, M. Zeng, and X. Huang, Human parity on CommonsenseQA: Augmenting self-attention with external attention, in Proc. 31st Int. Joint Conf. Artificial Intelligence, Vienna, Austria, 2022, pp. 2762–2768.
[8]
X. Wang, T. Gao, Z. Zhu, Z. Zhang, Z. Liu, J. Li, and J. Tang, KEPLER: A unified model for knowledge embedding and pre-trained language representation, Trans. Assoc. Comput. Linguist., vol. 9, pp. 176–194, 2021.
[9]
Y. Lin, Y. Meng, X. Sun, Q. Han, K. Kuang, J. Li, and F. Wu, BertGCN: Transductive text classification by combining GNN and BERT, in Proc. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, Virtual Event, 2021, pp. 1456–1462.
[10]
J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. H. Chi, Q. V. Le, and D. Zhou, Chain-of-thought prompting elicits reasoning in large language models, in Proc. 36th Int. Conf. Neural Information Processing Systems, New Orleans, LA, USA, 2022, p. 1800.
[11]
E. Allaway and K. McKeown, Zero-shot stance detection: A dataset and model using generalized topic representations, in Proc. 2020 Conf. Empirical Methods in Natural Language Processing (EMNLP), Virtual Event, 2020, pp. 8913–8931.
[12]
C. Zhu, Y. Xu, X. Ren, B. Y. Lin, M. Jiang, and W. Yu, Knowledge-augmented methods for natural language processing, in Proc. 60th Annu. Meeting of the Association for Computational Linguistics: Tutorial Abstracts, Dublin, Ireland, 2022, pp. 12–20.
[13]
W. Liu, P. Zhou, Z. Zhao, Z. Wang, Q. Ju, H. Deng, and P. Wang, K-BERT: Enabling language representation with knowledge graph, in Proc. 34th AAAI Conf. Artificial Intelligence, New York, NY, USA, 2020, pp. 2901–2908.
[14]
D. Yu, C. Zhu, Y. Yang, and M. Zeng, JAKET: Joint pre-training of knowledge graph and language understanding, in Proc. 36th AAAI Conf. Artificial Intelligence, Virtual Event, 2022, pp. 11630–11638.
[15]
Z. Zhang, X. Han, Z. Liu, X. Jiang, M. Sun, and Q. Liu, ERNIE: Enhanced language representation with informative entities, in Proc. 57th Annu. Meeting of the Association for Computational Linguistics, Florence, Italy, 2019, pp. 1441–1451.
[16]
T. Févry, L. B. Soares, N. FitzGerald, E. Choi, and T. Kwiatkowski, Entities as experts: Sparse memory access with entity supervision, in Proc. 2020 Conf. Empirical Methods in Natural Language Processing (EMNLP), Virtual Event, 2020, pp. 4937–4951.
[17]
I. Beltagy, K. Lo, and A. Cohan, SciBERT: A pretrained language model for scientific text, in Proc. 2019 Conf. Empirical Methods in Natural Language Processing and the 9th Int. Joint Conf. Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, 2019, pp. 3615–3620.
[18]
K. R. Kanakarajan, S. Ramamoorthy, V. Archana, S. Chatterjee, and M. Sankarasubbu, Saama research at MEDIQA 2019: Pre-trained BioBERT with attention visualisation for medical natural language inference, in Proc. 18th BioNLP Workshop and Shared Task, Florence, Italy, 2019, pp. 510–516.
[19]
D. Q. Nguyen, T. Vu, and A. T. Nguyen, BERTweet: A pre-trained language model for English tweets, in Proc. 2020 Conf. Empirical Methods in Natural Language Processing: System Demonstrations, Virtual Event, 2020, pp. 9–14.
[20]
V. Shwartz, P. West, R. Le Bras, C. Bhagavatula, and Y. Choi, Unsupervised commonsense question answering with self-talk, in Proc. 2020 Conf. Empirical Methods in Natural Language Processing (EMNLP), Virtual Event, 2020, pp. 4615–4629.
[21]
V. Karpukhin, B. Oguz, S. Min, P. Lewis, L. Wu, S. Edunov, D. Chen, and W. T. Yih, Dense passage retrieval for open-domain question answering, in Proc. 2020 Conf. Empirical Methods in Natural Language Processing (EMNLP), Virtual Event, 2020, pp. 6769–6781.
[22]
Y. Yao, S. Huang, L. Dong, F. Wei, H. Chen, and N. Zhang, Kformer: Knowledge injection in transformer feed-forward layers, in Proc. 11th CCF Int. Conf. Natural Language Processing and Chinese Computing, Guilin, China, 2022, pp. 131–143.
[23]
D. Küçük and F. Can, Stance detection: A survey, ACM Comput. Surv., vol. 53, no. 1, p. 12, 2020.
[24]
A. ALDayel and W. Magdy, Stance detection on social media: State of the art and trends, Inf. Process. Manage., vol. 58, no. 4, p. 102597, 2021.
[25]
M. Hardalov, A. Arora, P. Nakov, and I. Augenstein, A survey on stance detection for mis- and disinformation identification, in Proc. Findings of the Association for Computational Linguistics: NAACL 2022, Seattle, WA, USA, 2022, pp. 1259–1277.
[26]
M. Mohtarami, J. Glass, and P. Nakov, Contrastive language adaptation for cross-lingual stance detection, in Proc. 2019 Conf. Empirical Methods in Natural Language Processing and the 9th Int. Joint Conf. Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, 2019, pp. 4442–4452.
[27]
E. Zotova, R. Agerri, M. Nuñez, and G. Rigau, Multilingual stance detection in tweets: The Catalonia independence corpus, in Proc. 12th Language Resources and Evaluation Conf., Marseille, France, 2020, pp. 1368–1375.
[28]
Y. Luo, Z. Liu, Y. Shi, S. Z. Li, and Y. Zhang, Exploiting sentiment and common sense for zero-shot stance detection, in Proc. 29th Int. Conf. Computational Linguistics, Gyeongju, Republic of Korea, 2022, pp. 7112–7123.
[29]
R. Liu, Z. Lin, H. Ji, J. Li, P. Fu, and W. Wang, Target really matters: Target-aware contrastive learning and consistency regularization for few-shot stance detection, in Proc. 29th Int. Conf. Computational Linguistics, Gyeongju, Republic of Korea, 2022, pp. 6944–6954.
[30]
B. Liang, Q. Zhu, X. Li, M. Yang, L. Gui, Y. He, and R. Xu, JointCL: A joint contrastive learning framework for zero-shot stance detection, in Proc. 60th Annu. Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland, 2022, pp. 81–91.
[31]
N. Reimers and I. Gurevych, Sentence-BERT: Sentence embeddings using Siamese BERT-networks, in Proc. 2019 Conf. Empirical Methods in Natural Language Processing and the 9th Int. Joint Conf. Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, 2019, pp. 3982–3992.
[33]
Y. Li, T. Sosea, A. Sawant, A. J. Nair, D. Inkpen, and C. Caragea, P-stance: A large dataset for stance detection in political domain, in Proc. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, Virtual Event, 2021, pp. 2355–2365.
[34]
K. Glandt, S. Khanal, Y. J. Li, D. Caragea, and C. Caragea, Stance detection in COVID-19 tweets, in Proc. 59th Annu. Meeting of the Association for Computational Linguistics and the 11th Int. Joint Conf. Natural Language Processing (Volume 1: Long Papers), Virtual Event, 2021, pp. 1596–1611.
[35]
J. Du, R. Xu, Y. He, and L. Gui, Stance classification with target-specific neural attention, in Proc. 26th Int. Joint Conf. Artificial Intelligence, Melbourne, Australia, 2017, pp. 3988–3994.
[36]
J. Devlin, M. W. Chang, K. Lee, and K. Toutanova, BERT: Pre-training of deep bidirectional transformers for language understanding, in Proc. 2019 Conf. North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, MN, USA, 2019, pp. 4171–4186.
[37]
I. Augenstein, T. Rocktäschel, A. Vlachos, and K. Bontcheva, Stance detection with bidirectional conditional encoding, in Proc. 2016 Conf. Empirical Methods in Natural Language Processing, Austin, TX, USA, 2016, pp. 876–885.
[38]
W. Xue and T. Li, Aspect based sentiment analysis with gated convolutional networks, in Proc. 56th Annu. Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia, 2018, pp. 2514–2523.
[39]
B. Huang and K. Carley, Parameterized convolutional neural networks for aspect level sentiment classification, in Proc. 2018 Conf. Empirical Methods in Natural Language Processing, Brussels, Belgium, 2018, pp. 1091–1096.
[40]
B. Zhang, M. Yang, X. Li, Y. Ye, X. Xu, and K. Dai, Enhancing cross-target stance detection with transferable semantic-emotion knowledge, in Proc. 58th Annu. Meeting of the Association for Computational Linguistics, Virtual Event, 2020, pp. 3188–3197.
[41]
B. Zhang, X. Fu, D. Ding, H. Huang, Y. Li, and L. Jing, Investigating chain-of-thought with ChatGPT for stance detection on social media, arXiv preprint arXiv:2304.03087, 2023.
[42]
J. Wei, D. Huang, Y. Lu, D. Zhou, and Q. V. Le, Simple synthetic data reduces sycophancy in large language models, arXiv preprint arXiv:2308.03958, 2023.