Open Access

Deep Broad Learning for Emotion Classification in Textual Conversations

Center for Linguistics and Applied Linguistics, and Laboratory of Language Engineering and Computing, Guangdong University of Foreign Studies, Guangzhou 510006, China
Guangdong Provincial Key Laboratory of Nanophotonic Functional Materials and Devices, South China Normal University, Guangzhou 510006, China
School of Computer Science and Cyber Engineering, Guangzhou University, Guangzhou 510006, China
Modern Education Technology Center, Guangdong University of Foreign Studies, Guangzhou 510006, China

Abstract

Emotion classification in textual conversations aims to identify the emotion of each utterance in a conversation, and has become one of the most important tasks in natural language processing in recent years. It remains challenging for machines because emotions depend heavily on textual context. To address this challenge, we propose Deep Broad Learning (DBL), a method that integrates the advantages of deep learning and broad learning for emotion classification in textual conversations. Built on a Convolutional Neural Network (CNN), a Bidirectional Long Short-Term Memory (Bi-LSTM) network, and broad learning, DBL captures both local (i.e., utterance-level) contextual information within an utterance and global (i.e., speaker-level) contextual information across a conversation. Extensive experiments on three public textual conversation datasets show that context at both the utterance level and the speaker level consistently benefits emotion classification, and that DBL outperforms the baseline methods in weighted-average F1 on most of the test datasets.
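The broad-learning component described above can be illustrated with a minimal sketch. This is a toy illustration, not the authors' implementation: the CNN and Bi-LSTM encoders are replaced by random vectors standing in for context-aware utterance representations, all dimensions are arbitrary, and the function names are hypothetical. It only shows the characteristic broad-learning pattern of random feature nodes, random enhancement nodes, and output weights solved in closed form by ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)

def broad_learning_fit(X, Y, n_feature=20, n_enhance=40, reg=1e-2):
    """Hypothetical broad-learning head: random feature nodes and
    enhancement nodes, with output weights solved in closed form."""
    We = rng.normal(size=(X.shape[1], n_feature))
    Z = np.tanh(X @ We)                         # feature nodes
    Wh = rng.normal(size=(n_feature, n_enhance))
    H = np.tanh(Z @ Wh)                         # enhancement nodes
    A = np.hstack([Z, H])                       # broad expansion
    # ridge-regularized least squares for the output weights
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return We, Wh, W

def broad_learning_predict(model, X):
    We, Wh, W = model
    Z = np.tanh(X @ We)
    H = np.tanh(Z @ Wh)
    return np.hstack([Z, H]) @ W

# Toy data: X stands in for context-aware utterance vectors (which the
# paper would obtain from CNN + Bi-LSTM encoders); Y are one-hot labels.
X = rng.normal(size=(200, 16))
labels = (X[:, 0] > 0).astype(int)
Y = np.eye(2)[labels]

model = broad_learning_fit(X, Y)
pred = broad_learning_predict(model, X).argmax(axis=1)
acc = (pred == labels).mean()
```

Because training reduces to one linear solve over the stacked feature and enhancement nodes, such a head trains far faster than an end-to-end deep classifier, which is the efficiency argument usually made for broad learning.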

Tsinghua Science and Technology
Pages 481-491
Cite this article:
Peng S, Zeng R, Liu H, et al. Deep Broad Learning for Emotion Classification in Textual Conversations. Tsinghua Science and Technology, 2024, 29(2): 481-491. https://doi.org/10.26599/TST.2023.9010021


Received: 22 November 2022
Revised: 18 February 2023
Accepted: 19 March 2023
Published: 22 September 2023
© The author(s) 2024.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
