Open Access
Deep Broad Learning for Emotion Classification in Textual Conversations
Tsinghua Science and Technology 2024, 29 (2): 481-491
Published: 22 September 2023

Emotion classification in textual conversations focuses on classifying the emotion of each utterance in a textual conversation, and has become one of the most important tasks in natural language processing in recent years. However, it is a challenging task for machines because emotions rely heavily on textual context. To address this challenge, we propose Deep Broad Learning (DBL), a method for emotion classification in textual conversations that integrates the advantages of deep learning and broad learning. Built on a Convolutional Neural Network (CNN), a Bidirectional Long Short-Term Memory (Bi-LSTM) network, and broad learning, it aims to capture more effectively both the local (utterance-level) contextual information within an utterance and the global (speaker-level) contextual information across a conversation. Extensive experiments on three public textual conversation datasets show that context at both the utterance level and the speaker level is consistently beneficial to emotion classification performance. In addition, the results show that the proposed method outperforms the baseline methods in weighted-average F1 on most of the test datasets.
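
The sketch below illustrates how the components named in the abstract could fit together; it is not the authors' implementation, and all layer sizes, node counts, and the seven-emotion output are illustrative assumptions. A CNN encodes each utterance, a Bi-LSTM adds conversation-level context across utterances, and a broad-learning head (fixed random mapping and enhancement nodes with a learned linear readout) produces per-utterance emotion logits.

```python
# Minimal DBL-style sketch (assumed dimensions, not the paper's code).
import torch
import torch.nn as nn


class DBLSketch(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, n_filters=64,
                 hidden=128, n_feature_nodes=200, n_enhance_nodes=100,
                 n_emotions=7):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Utterance-level (local) context: CNN over word embeddings.
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        # Conversation/speaker-level (global) context: Bi-LSTM over utterances.
        self.bilstm = nn.LSTM(n_filters, hidden, batch_first=True,
                              bidirectional=True)
        # Broad-learning head: random mapping and enhancement nodes stay fixed.
        self.feature_map = nn.Linear(2 * hidden, n_feature_nodes)
        self.enhance = nn.Linear(n_feature_nodes, n_enhance_nodes)
        for p in list(self.feature_map.parameters()) + list(self.enhance.parameters()):
            p.requires_grad = False
        self.out = nn.Linear(n_feature_nodes + n_enhance_nodes, n_emotions)

    def forward(self, conv_tokens):
        # conv_tokens: (batch, n_utterances, n_words) token ids
        b, u, w = conv_tokens.shape
        x = self.embed(conv_tokens.view(b * u, w))                  # (b*u, w, emb)
        x = self.conv(x.transpose(1, 2)).relu().max(dim=2).values   # (b*u, filters)
        x, _ = self.bilstm(x.view(b, u, -1))                        # (b, u, 2*hidden)
        z = torch.tanh(self.feature_map(x))                         # mapping nodes
        h = torch.tanh(self.enhance(z))                             # enhancement nodes
        return self.out(torch.cat([z, h], dim=-1))                  # per-utterance logits


logits = DBLSketch()(torch.randint(0, 10000, (2, 5, 20)))
print(logits.shape)  # torch.Size([2, 5, 7])
```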

Open Access
CNN-Based Broad Learning for Cross-Domain Emotion Classification
Tsinghua Science and Technology 2023, 28 (2): 360-369
Published: 29 September 2022

Cross-domain emotion classification aims to leverage useful information from a source domain to help predict emotion polarity in a target domain in an unsupervised or semi-supervised manner. Due to the domain discrepancy, an emotion classifier trained on the source domain may not work well on the target domain. Many researchers have focused on traditional cross-domain sentiment classification, which is coarse-grained emotion classification, whereas fine-grained cross-domain emotion classification has rarely been studied. In this paper, we propose a Convolutional Neural Network (CNN) based broad learning method for cross-domain emotion classification that combines the strengths of CNN and broad learning. We first use a CNN to extract domain-invariant and domain-specific features simultaneously, and then employ broad learning to train two more efficient classifiers on these features. Then, to take advantage of both classifiers, we design a co-training model that boosts them jointly. Finally, we conduct comparative experiments on four datasets to verify the effectiveness of the proposed method. The experimental results show that it improves emotion classification performance more effectively than the baseline methods.
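
As a rough illustration of the two-classifier co-training idea, the sketch below trains one classifier on a domain-invariant feature view and one on a domain-specific view, and lets each round pseudo-label the most confident target samples for the next round. The broad-learning classifier here is simplified to a ridge readout over random mapping nodes, and the confidence threshold, round count, and pseudo-labeling rule are assumptions, not the paper's procedure.

```python
# Co-training sketch over two feature views (illustrative assumptions throughout).
import numpy as np


class BroadClassifier:
    def __init__(self, n_nodes=200, reg=1e-2, seed=0):
        self.n_nodes, self.reg, self.rng = n_nodes, reg, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W_map = self.rng.normal(size=(X.shape[1], self.n_nodes))
        H = np.tanh(X @ self.W_map)                      # random mapping nodes
        Y = np.eye(y.max() + 1)[y]                        # one-hot labels
        self.W_out = np.linalg.solve(H.T @ H + self.reg * np.eye(self.n_nodes),
                                     H.T @ Y)             # ridge readout
        return self

    def predict_proba(self, X):
        scores = np.tanh(X @ self.W_map) @ self.W_out
        e = np.exp(scores - scores.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)


def co_train(Xi_src, Xs_src, y_src, Xi_tgt, Xs_tgt, rounds=3, conf=0.9):
    """Xi_*: domain-invariant view, Xs_*: domain-specific view."""
    Xi, Xs, y = Xi_src.copy(), Xs_src.copy(), y_src.copy()
    for _ in range(rounds):
        ci = BroadClassifier(seed=1).fit(Xi, y)
        cs = BroadClassifier(seed=2).fit(Xs, y)
        pi, ps = ci.predict_proba(Xi_tgt), cs.predict_proba(Xs_tgt)
        # Each view contributes the target samples it is most confident about.
        keep = (pi.max(axis=1) > conf) | (ps.max(axis=1) > conf)
        if not keep.any():
            break
        pseudo = np.where(pi.max(axis=1) >= ps.max(axis=1),
                          pi.argmax(axis=1), ps.argmax(axis=1))[keep]
        Xi = np.vstack([Xi, Xi_tgt[keep]])
        Xs = np.vstack([Xs, Xs_tgt[keep]])
        y = np.concatenate([y, pseudo])
        Xi_tgt, Xs_tgt = Xi_tgt[~keep], Xs_tgt[~keep]
    return ci, cs
```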

Open Access
p-Norm Broad Learning for Negative Emotion Classification in Social Networks
Big Data Mining and Analytics 2022, 5 (3): 245-256
Published: 09 June 2022

Negative emotion classification refers to the automatic classification of negative emotions in texts from social networks. Most existing methods are based on deep learning models and face challenges such as complex structures and too many hyperparameters. To meet these challenges, in this paper we propose a method for negative emotion classification that combines a Robustly Optimized BERT Pretraining Approach (RoBERTa) with p-norm Broad Learning (p-BL). Specifically, this paper makes three main contributions. Firstly, we fine-tune RoBERTa to adapt it to the negative emotion classification task, and then employ the fine-tuned RoBERTa to extract features from the original texts and generate sentence vectors. Secondly, we adopt p-BL to construct a classifier and use it to predict the negative emotions of texts. Compared with deep learning models, p-BL has advantages such as a simple three-layer structure and fewer parameters to train; moreover, it can suppress the adverse effects of outliers and noise in the data by flexibly changing the value of p. Thirdly, we conduct extensive experiments on public datasets, and the results show that the proposed method outperforms the baseline methods on the tested datasets.
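
The sketch below shows one plausible way to wire this pipeline together; it is not the paper's code. A RoBERTa encoder (used off the shelf here, whereas the paper fine-tunes it first) turns each text into a sentence vector, and a small broad-learning readout with a p-norm residual, approximated by iteratively reweighted least squares (IRLS), predicts the label. The node count, the value of p, and the IRLS scheme are all assumptions.

```python
# RoBERTa features + p-norm broad-learning readout (illustrative sketch).
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("roberta-base")
enc = AutoModel.from_pretrained("roberta-base")


def sentence_vectors(texts):
    """Mean-pooled RoBERTa embeddings as text features."""
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = enc(**batch).last_hidden_state          # (batch, seq, 768)
    mask = batch["attention_mask"].unsqueeze(-1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()


def fit_p_bl(X, y, n_nodes=200, p=1.5, reg=1e-2, iters=10, seed=0):
    """Broad-learning readout with a p-norm residual, solved by IRLS."""
    rng = np.random.default_rng(seed)
    W_map = rng.normal(size=(X.shape[1], n_nodes))
    H = np.tanh(X @ W_map)                               # random mapping nodes
    Y = np.eye(y.max() + 1)[y]
    w = np.ones(len(X))                                  # IRLS sample weights
    for _ in range(iters):
        Hw = H * w[:, None]
        W_out = np.linalg.solve(Hw.T @ H + reg * np.eye(n_nodes), Hw.T @ Y)
        resid = np.linalg.norm(H @ W_out - Y, axis=1) + 1e-8
        w = resid ** (p - 2)                             # down-weights outliers for p < 2
    return W_map, W_out


def predict(W_map, W_out, X):
    return (np.tanh(X @ W_map) @ W_out).argmax(axis=1)
```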

Open Access
An Energy-Efficient Data Collection Scheme Using Denoising Autoencoder in Wireless Sensor Networks
Tsinghua Science and Technology 2019, 24 (1): 86-96
Published: 08 November 2018

As one of the key operations in Wireless Sensor Networks (WSNs), energy-efficient data collection has been actively explored in the literature. However, the transform basis used to sparsify the sensed data is usually chosen empirically, and the transformed results are not always the sparsest. In this paper, we propose a Data Collection scheme based on a Denoising Autoencoder (DCDA) to solve this problem. In the data training phase, a Denoising AutoEncoder (DAE) is trained on historical sensed data to compute the data measurement matrix and the data reconstruction matrix. Then, in the data collection phase, the sensed data of the whole network are collected along a data collection tree: the measurement matrix compresses the sensed data at each sensor node, and the reconstruction matrix recovers the original data at the sink. Finally, the data communication and data reconstruction performance of the proposed scheme are evaluated and compared with those of existing schemes on real-world sensed data. The experimental results show that, compared with its counterparts, the proposed scheme achieves a higher data compression rate, lower energy consumption, more accurate data reconstruction, and faster reconstruction speed.
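
The sketch below gives a rough picture of the two phases described above; the dimensions, noise level, training loop, and random stand-in data are illustrative assumptions rather than the paper's exact setup. A linear denoising autoencoder is trained on historical readings, its encoder then plays the role of the measurement (compression) matrix at the sensor nodes, and its decoder plays the role of the reconstruction matrix at the sink.

```python
# DCDA-style sketch: train a denoising autoencoder, then compress and reconstruct.
import torch
import torch.nn as nn

N_SENSORS, N_MEASUREMENTS = 100, 20           # compress 100 readings to 20

encoder = nn.Linear(N_SENSORS, N_MEASUREMENTS, bias=False)   # measurement matrix
decoder = nn.Linear(N_MEASUREMENTS, N_SENSORS, bias=False)   # reconstruction matrix
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

historical = torch.randn(5000, N_SENSORS)     # stand-in for historical sensed data

# Training phase: reconstruct clean readings from noise-corrupted inputs.
for epoch in range(50):
    noisy = historical + 0.1 * torch.randn_like(historical)
    recon = decoder(encoder(noisy))
    loss = nn.functional.mse_loss(recon, historical)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Collection phase: nodes send compressed measurements, the sink reconstructs.
with torch.no_grad():
    current = torch.randn(1, N_SENSORS)           # one round of network readings
    measurements = encoder(current)               # transmitted along the collection tree
    reconstructed = decoder(measurements)         # recovered at the sink
print(measurements.shape, reconstructed.shape)
```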
