Regular Paper Issue
Top-down Text-Level Discourse Rhetorical Structure Parsing with Bidirectional Representation Learning
Journal of Computer Science and Technology 2023, 38 (5): 985-1001
Published: 30 September 2023

Early studies on discourse rhetorical structure parsing mainly adopted bottom-up approaches, which limit the parsing process to local information. Although current top-down parsers can better capture global information and have achieved notable success, local and global information matter to different degrees at different levels of discourse parsing. This paper argues that it is more sensible to combine local and global information for discourse parsing. To demonstrate this, we introduce a top-down discourse parser with bidirectional representation learning capabilities. Existing corpora for Rhetorical Structure Theory (RST) are known to be quite limited in size, which makes discourse parsing very challenging. To alleviate this problem, we leverage boundary features and a data augmentation strategy to tap the potential of our parser. We evaluate with two methods, and the experiments on the RST-DT corpus show that our parser improves performance primarily through the effective combination of local and global information, with the boundary features and the data augmentation strategy also contributing. Based on gold-standard elementary discourse units (EDUs), our parser significantly advances over the baseline systems in nuclearity detection, with competitive results on the other three indicators (span, relation, and full). Based on automatically segmented EDUs, our parser still outperforms previous state-of-the-art work.
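Top-down parsers of this kind typically build the tree by recursively choosing a split point inside the current span of EDUs. The control flow can be sketched as below; the scoring function here is a toy stand-in, not the paper's learned split-point model:

```python
# Minimal top-down span-splitting sketch: recursively split a span of EDUs
# into a binary discourse tree. The scorer is a toy stand-in for a learned
# split-point model.

def toy_split_scorer(edus, lo, hi):
    # Pretend model: prefer the midpoint split (purely illustrative).
    mid = (lo + hi) // 2
    return {k: -abs(k - mid) for k in range(lo + 1, hi)}

def build_tree(edus, lo=0, hi=None, scorer=toy_split_scorer):
    if hi is None:
        hi = len(edus)
    if hi - lo == 1:                      # a single EDU is a leaf
        return edus[lo]
    scores = scorer(edus, lo, hi)
    k = max(scores, key=scores.get)       # best-scoring split point
    return (build_tree(edus, lo, k, scorer),
            build_tree(edus, k, hi, scorer))

tree = build_tree(["e1", "e2", "e3", "e4"])
```

A real parser would replace `toy_split_scorer` with span representations from the encoder; the recursion itself is what makes global (whole-span) information available at every decision.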

Regular Paper Issue
Neural Emotion Detection via Personal Attributes
Journal of Computer Science and Technology 2022, 37 (5): 1146-1160
Published: 30 September 2022

There has been a recent line of work on automatically detecting the emotions of posts in social media. In the literature, studies treat posts independently and detect their emotions separately. Different from previous studies, we explore the dependence among relevant posts via their authors' backgrounds, since authors with similar backgrounds, e.g., gender and location, tend to express similar emotions. However, personal attributes are not easy to obtain on most social media websites. Accordingly, we propose two approaches that determine personal attributes and exploit them across posts for emotion detection: the Joint Model with Personal Attention Mechanism (JPA) detects emotions and personal attributes jointly and captures attribute-aware words to connect similar people, while the Neural Personal Discrimination (NPD) model determines personal attributes from posts and connects relevant posts with similar attributes for emotion detection. Experimental results show the usefulness of personal attributes in emotion detection, and the effectiveness of the proposed JPA and NPD approaches in capturing personal attributes over state-of-the-art statistical and neural models.
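One way to read the "personal attention" idea is attention over a post's words conditioned on an attribute representation, so attribute-aware words receive higher weight. A numpy sketch under that assumption; the names and exact form are illustrative, not the paper's JPA architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def personal_attention(word_states, attr_vec):
    # Score each word by its affinity with the personal-attribute vector,
    # then pool the post into one attribute-aware representation.
    weights = softmax(word_states @ attr_vec)
    return weights, weights @ word_states

rng = np.random.default_rng(0)
words = rng.standard_normal((4, 8))   # hidden states of a 4-word post
attr = rng.standard_normal(8)         # e.g., a "gender"/"location" embedding
weights, post_vec = personal_attention(words, attr)
```

The pooled `post_vec` would then feed a shared emotion classifier, letting posts by similar authors be represented similarly.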

Open Access Issue
CNN-Based Broad Learning for Cross-Domain Emotion Classification
Tsinghua Science and Technology 2023, 28 (2): 360-369
Published: 29 September 2022

Cross-domain emotion classification aims to leverage useful information in a source domain to help predict emotion polarity in a target domain in an unsupervised or semi-supervised manner. Due to the domain discrepancy, an emotion classifier trained on the source domain may not work well on the target domain. Many researchers have focused on traditional cross-domain sentiment classification, which is coarse-grained; fine-grained cross-domain emotion classification has rarely been addressed. In this paper, we propose a method, called convolutional neural network (CNN) based broad learning, for cross-domain emotion classification, combining the strengths of CNN and broad learning. We first utilize CNN to extract domain-invariant and domain-specific features simultaneously, so as to train two more efficient classifiers with broad learning. Then, to take advantage of these two classifiers, we design a co-training model in which the two classifiers boost each other. Finally, we conduct comparative experiments on four datasets to verify the effectiveness of the proposed method. The experimental results show that the proposed method improves emotion classification performance more effectively than the baseline methods.
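The co-training step can be sketched generically: each classifier pseudo-labels unlabeled target-domain examples it is confident about, and those labels augment the other classifier's training set. The classifiers below are trivial stand-ins, not the CNN or broad-learning models:

```python
# Toy co-training round: two classifiers exchange confidently-labeled
# unlabeled examples. Each stand-in classifier scores by one feature.

def confident_label(score, threshold=0.8):
    # Return a pseudo-label only when the classifier is confident.
    if score >= threshold:
        return 1
    if score <= 1 - threshold:
        return 0
    return None

def co_train_round(unlabeled, clf_a, clf_b, threshold=0.8):
    extra_a, extra_b = [], []
    for x in unlabeled:
        ya = confident_label(clf_a(x), threshold)
        yb = confident_label(clf_b(x), threshold)
        # Each classifier receives the OTHER's confident predictions.
        if ya is not None:
            extra_b.append((x, ya))
        if yb is not None:
            extra_a.append((x, yb))
    return extra_a, extra_b

clf_a = lambda x: x["domain_invariant"]   # hypothetical feature views
clf_b = lambda x: x["domain_specific"]
data = [{"domain_invariant": 0.9, "domain_specific": 0.5},
        {"domain_invariant": 0.5, "domain_specific": 0.1}]
extra_a, extra_b = co_train_round(data, clf_a, clf_b)
```

In a full system this round would be repeated, retraining both classifiers on their augmented sets until the pseudo-labels stabilize.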

Regular Paper Issue
Document-Level Neural Machine Translation with Hierarchical Modeling of Global Context
Journal of Computer Science and Technology 2022, 37 (2): 295-308
Published: 31 March 2022

Document-level machine translation (MT) remains challenging due to the difficulty of efficiently using document-level global context for translation. In this paper, we propose a hierarchical model to learn the global context for document-level neural machine translation (NMT). This is done through a sentence encoder that captures intra-sentence dependencies and a document encoder that models document-level inter-sentence consistency and coherence. With this hierarchical architecture, we feed the extracted document-level global context back to each word in a top-down fashion to distinguish different translations of a word according to its specific surrounding context. Notably, we explore the effect of three popular attention functions during this backward information-distribution phase, to take a close look at how our model distributes global context information. In addition, since large-scale in-domain document-level parallel corpora are usually unavailable, we use a two-step training strategy that exploits a large-scale corpus of out-of-domain parallel sentence pairs and a small-scale corpus of in-domain parallel document pairs to achieve domain adaptability. On Chinese-English and English-German corpora, our model significantly improves over the Transformer baseline by 4.5 BLEU points on average, which demonstrates the effectiveness of the proposed hierarchical model in document-level NMT.
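Three attention functions commonly compared in such settings are dot-product, scaled dot-product, and additive (Bahdanau-style) attention; whether these are the paper's exact three is an assumption here. A numpy sketch of the three scorers, with the global context vector as the query and per-word states as keys:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dot_attention(q, keys):
    return softmax(keys @ q)

def scaled_dot_attention(q, keys):
    return softmax(keys @ q / np.sqrt(q.shape[-1]))

def additive_attention(q, keys, W_q, W_k, v):
    # Bahdanau-style: score = v^T tanh(W_q q + W_k k)
    scores = np.tanh(keys @ W_k.T + q @ W_q.T) @ v
    return softmax(scores)

rng = np.random.default_rng(0)
d = 4
q = rng.standard_normal(d)           # document-level global context vector
keys = rng.standard_normal((3, d))   # word states within one sentence
w = scaled_dot_attention(q, keys)
```

Each function yields a distribution over the words, deciding how much global context each word's representation absorbs during the top-down feedback.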

Regular Paper Issue
Language Adaptation for Entity Relation Classification via Adversarial Neural Networks
Journal of Computer Science and Technology 2021, 36 (1): 207-220
Published: 05 January 2021

Entity relation classification aims to classify the semantic relationship between two marked entities in a given sentence, and plays a vital role in various natural language processing applications. However, existing studies focus on exploiting mono-lingual data in English, due to the lack of labeled data in other languages. How to effectively benefit from a richly-labeled language to help a poorly-labeled language remains an open problem. In this paper, we propose a language adaptation framework for cross-lingual entity relation classification. The basic idea is to employ adversarial neural networks (AdvNN) to transfer feature representations from one language to another. In particular, the framework enables feature imitation via the competition between a sentence encoder and a rival language discriminator, yielding effective representations. To verify the effectiveness of AdvNN, we introduce two adversarial structures: dual-channel AdvNN and single-channel AdvNN. Experimental results on the ACE 2005 multilingual training corpus show that single-channel AdvNN achieves the best performance in both unsupervised and semi-supervised scenarios, yielding improvements of 6.61% and 2.98% over the state of the art, respectively. Compared with baselines that directly adopt a machine translation module, both dual-channel and single-channel AdvNN significantly improve the performance (F1) of cross-lingual entity relation classification. Moreover, extensive analysis and discussion demonstrate the appropriateness and effectiveness of different parameter settings in our language adaptation framework.
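One standard way to realize such encoder-discriminator competition is alternating min-max updates on the discriminator's language loss: the discriminator descends on it, while the encoder ascends to fool its rival. The sketch below is a generic illustration of that objective with a linear encoder and a logistic discriminator, not the paper's AdvNN architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def disc_loss(H, w, y):
    # Cross-entropy of the language discriminator on representations H.
    p = np.clip(sigmoid(H @ w), 1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Toy sentence features from two "languages".
X = np.vstack([rng.standard_normal((32, 2)) + [2.0, 0.0],
               rng.standard_normal((32, 2)) - [2.0, 0.0]])
y = np.concatenate([np.ones(32), np.zeros(32)])   # language labels

W_enc = np.eye(2)               # linear "sentence encoder"
w_disc = np.array([0.5, 0.0])   # logistic language discriminator
lr = 0.05

# Discriminator step: gradient DESCENT on the language loss.
H = X @ W_enc.T
p = sigmoid(H @ w_disc)
w_disc_new = w_disc - lr * H.T @ (p - y) / len(X)

# Encoder step: gradient ASCENT on the same loss, to fool the rival.
p2 = sigmoid(H @ w_disc_new)
grad_H = np.outer(p2 - y, w_disc_new)            # dLoss/dH
W_enc_new = W_enc + lr * grad_H.T @ X / len(X)

before = disc_loss(X @ W_enc.T, w_disc, y)
after_disc = disc_loss(X @ W_enc.T, w_disc_new, y)
after_enc = disc_loss(X @ W_enc_new.T, w_disc_new, y)
```

At equilibrium the encoder's representations carry little language-identity signal, which is what allows a relation classifier trained on the rich language to transfer.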

Regular Paper Issue
Word-Pair Relevance Modeling with Multi-View Neural Attention Mechanism for Sentence Alignment
Journal of Computer Science and Technology 2020, 35 (3): 617-628
Published: 29 May 2020

Sentence alignment provides multi-lingual or cross-lingual natural language processing (NLP) applications with high-quality parallel sentence pairs. Normally, an aligned sentence pair contains multiple aligned words, which intuitively play different roles during sentence alignment. Inspired by this intuition, we propose to address sentence alignment by exploring the semantic interactions among fine-grained word pairs within a neural network framework. In particular, we first employ a word-pair relevance network with various relevance measures to capture different kinds of semantic interactions among word pairs, and then model their importance with a multi-view attention network. Experimental results on both monotonic and non-monotonic bitexts show that the proposed approach significantly improves sentence alignment performance.
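The word-pair idea can be pictured as one relevance matrix per measure (one "view"), with an attention over views deciding how the measures are mixed. A numpy sketch using cosine and dot-product as two illustrative measures; the paper's actual measures and attention form are not given in this abstract:

```python
import numpy as np

def cosine_view(S, T):
    Sn = S / np.linalg.norm(S, axis=1, keepdims=True)
    Tn = T / np.linalg.norm(T, axis=1, keepdims=True)
    return Sn @ Tn.T

def dot_view(S, T):
    return S @ T.T

def multi_view_score(S, T, view_weights):
    # Stack per-measure relevance matrices, then mix them with
    # softmax-normalized view weights (the "multi-view" attention).
    views = np.stack([cosine_view(S, T), dot_view(S, T)])
    w = np.exp(view_weights - view_weights.max())
    w /= w.sum()
    mixed = np.tensordot(w, views, axes=1)     # (len_S, len_T)
    # A simple sentence-pair alignment score: mean over source words of
    # the best relevance each achieves against the target sentence.
    return mixed.max(axis=1).mean()

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 5))   # source-sentence word embeddings
score = multi_view_score(S, S.copy(), np.zeros(2))
```

In a trained aligner the view weights (and any bilinear measures) would be learned, and the pooled score would feed the alignment decision for the sentence pair.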
