Deep Broad Learning for Emotion Classification in Textual Conversations
Tsinghua Science and Technology 2024, 29 (2): 481-491
Published: 22 September 2023

Emotion classification in textual conversations aims to classify the emotion of each utterance in a conversation, and has become one of the most important tasks in natural language processing in recent years. It is challenging for machines because emotions depend heavily on the textual context. To address this challenge, we propose Deep Broad Learning (DBL), a method that integrates the advantages of deep learning and broad learning for emotion classification in textual conversations. Building on a Convolutional Neural Network (CNN), a Bidirectional Long Short-Term Memory network (Bi-LSTM), and broad learning, DBL captures both the local (utterance-level) contextual information within an utterance and the global (speaker-level) contextual information across a conversation. Extensive experiments on three public textual conversation datasets show that context at both the utterance level and the speaker level consistently benefits emotion classification, and that the proposed method outperforms the baseline methods in weighted-average F1 on most of the test datasets.
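The architecture described in the abstract can be pictured with a short sketch. The following is a minimal illustration, not the authors' code: a CNN pools word embeddings into utterance vectors (utterance-level context), a Bi-LSTM passes context across the utterances of a conversation (conversation-level context), and a broad-learning-style wide readout with random enhancement nodes maps the result to emotion logits. All names, dimensions, and the use of PyTorch are illustrative assumptions.

# Minimal sketch of the DBL idea (illustrative assumptions, not the
# authors' implementation): CNN for utterance-level context, Bi-LSTM for
# conversation-level context, and a broad-learning-style wide readout.
import torch
import torch.nn as nn

class DBLSketch(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, n_filters=64,
                 hidden=64, n_emotions=6):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # CNN over the words of each utterance (local, utterance-level context)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        # Bi-LSTM over the sequence of utterances (global, conversation-level context)
        self.bilstm = nn.LSTM(n_filters, hidden, bidirectional=True,
                              batch_first=True)
        # Broad-learning-style readout: random-style "enhancement" mapping,
        # then a linear output layer over the widened feature vector.
        self.enhance = nn.Linear(2 * hidden, 200)
        self.out = nn.Linear(2 * hidden + 200, n_emotions)

    def forward(self, conv_tokens):
        # conv_tokens: (batch, n_utterances, n_words) token ids
        b, u, w = conv_tokens.shape
        x = self.emb(conv_tokens.view(b * u, w))          # (b*u, w, emb)
        x = self.conv(x.transpose(1, 2)).relu()           # (b*u, filters, w)
        x = x.max(dim=2).values.view(b, u, -1)            # utterance vectors
        ctx, _ = self.bilstm(x)                           # (b, u, 2*hidden)
        enh = torch.tanh(self.enhance(ctx))               # enhancement nodes
        return self.out(torch.cat([ctx, enh], dim=-1))    # per-utterance logits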

CNN-Based Broad Learning for Cross-Domain Emotion Classification
Tsinghua Science and Technology 2023, 28 (2): 360-369
Published: 29 September 2022

Cross-domain emotion classification aims to leverage useful information from a source domain to predict emotion polarity in a target domain in an unsupervised or semi-supervised manner. Owing to the domain discrepancy, an emotion classifier trained on the source domain may not work well on the target domain. Most prior research has focused on traditional cross-domain sentiment classification, which is coarse-grained; fine-grained cross-domain emotion classification has rarely been studied. In this paper, we propose a convolutional neural network (CNN) based broad learning method for cross-domain emotion classification that combines the strengths of CNN and broad learning. We first use a CNN to extract domain-invariant and domain-specific features simultaneously, and train two classifiers on these features with broad learning. We then design a co-training model in which the two classifiers boost each other. Finally, we conduct comparative experiments on four datasets, and the results show that the proposed method improves emotion classification performance more effectively than the baseline methods.
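As a rough illustration of the co-training step described above, the sketch below (an assumption-laden reconstruction, not the paper's exact algorithm) trains one classifier on domain-invariant features and one on domain-specific features, then lets each add its most confidently pseudo-labeled target examples to the shared training pool in every round. The feature matrices, the logistic-regression stand-ins for the broad-learning classifiers, and the selection rule are all hypothetical.

# Hypothetical co-training sketch: Xi_* are domain-invariant features,
# Xs_* are domain-specific features; logistic regression stands in for
# the broad-learning classifiers of the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(Xi_l, Xs_l, y_l, Xi_u, Xs_u, rounds=5, k=50):
    clf_i = LogisticRegression(max_iter=1000)
    clf_s = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf_i.fit(Xi_l, y_l)
        clf_s.fit(Xs_l, y_l)
        if len(Xi_u) == 0:
            break
        # Each classifier ranks the unlabeled target examples by confidence...
        conf_i = clf_i.predict_proba(Xi_u).max(axis=1)
        conf_s = clf_s.predict_proba(Xs_u).max(axis=1)
        picked = np.union1d(np.argsort(-conf_i)[:k], np.argsort(-conf_s)[:k])
        # ...and the more confident classifier's pseudo-label is adopted.
        pseudo = np.where(conf_i[picked] >= conf_s[picked],
                          clf_i.predict(Xi_u[picked]),
                          clf_s.predict(Xs_u[picked]))
        Xi_l = np.vstack([Xi_l, Xi_u[picked]])
        Xs_l = np.vstack([Xs_l, Xs_u[picked]])
        y_l = np.concatenate([y_l, pseudo])
        keep = np.setdiff1d(np.arange(len(Xi_u)), picked)
        Xi_u, Xs_u = Xi_u[keep], Xs_u[keep]
    return clf_i, clf_s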

p-Norm Broad Learning for Negative Emotion Classification in Social Networks
Big Data Mining and Analytics 2022, 5 (3): 245-256
Published: 09 June 2022

Negative emotion classification refers to automatically classifying the negative emotion of texts in social networks. Most existing methods are based on deep learning models and face challenges such as complex structures and too many hyperparameters. To meet these challenges, we propose a negative emotion classification method that combines a Robustly Optimized BERT Pretraining Approach (RoBERTa) with p-norm Broad Learning (p-BL). This paper makes three main contributions. First, we fine-tune RoBERTa for the task of negative emotion classification, and use the fine-tuned model to extract features from the original texts and generate sentence vectors. Second, we construct a classifier with p-BL and use it to predict the negative emotion of each text. Compared with deep learning models, p-BL has a simple three-layer structure and fewer parameters to train, and it can suppress the adverse effects of outliers and noise in the data by flexibly changing the value of p. Third, we conduct extensive experiments on public datasets, and the results show that the proposed method outperforms the baseline methods on the tested datasets.
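A hedged sketch of this pipeline follows: a pretrained RoBERTa encodes each text into a sentence vector, and a three-layer broad-learning readout (random feature nodes, enhancement nodes, and a closed-form output layer) classifies it. The p-norm weighting that defines p-BL is specific to the paper, so the sketch substitutes the standard p = 2 ridge-regression solution; the model name, node counts, and regularization constant are assumptions.

# Hedged pipeline sketch: RoBERTa sentence vectors feeding a standard
# broad-learning readout. The p-norm variant of the paper is replaced
# here by the ordinary p = 2 (ridge) closed-form solution.
import numpy as np
import torch
from transformers import RobertaTokenizer, RobertaModel

tok = RobertaTokenizer.from_pretrained("roberta-base")
enc = RobertaModel.from_pretrained("roberta-base")

def sentence_vectors(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = enc(**batch).last_hidden_state[:, 0]   # <s> token embedding
    return out.numpy()

def broad_learning_fit(X, y_onehot, n_feat=200, n_enh=200, lam=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    Wf = rng.standard_normal((X.shape[1], n_feat))   # random feature nodes
    Z = np.tanh(X @ Wf)
    We = rng.standard_normal((n_feat, n_enh))        # enhancement nodes
    A = np.hstack([Z, np.tanh(Z @ We)])
    # Closed-form ridge solution for the output weights (the p = 2 case).
    W = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y_onehot)
    return Wf, We, W

Prediction for new texts would pass their sentence vectors through the same random mappings Wf and We and take the argmax of the widened features multiplied by W.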
