Regular Paper
Top-down Text-Level Discourse Rhetorical Structure Parsing with Bidirectional Representation Learning
Journal of Computer Science and Technology 2023, 38 (5): 985-1001
Published: 30 September 2023

Early studies on discourse rhetorical structure parsing mainly adopted bottom-up approaches, which restrict the parsing process to local information. Although current top-down parsers capture global information better and have achieved considerable success, local and global information matter to different degrees at different levels of discourse parsing. This paper argues that combining local and global information is a more sensible approach to discourse parsing. To demonstrate this, we introduce a top-down discourse parser with bidirectional representation learning capabilities. Existing corpora for Rhetorical Structure Theory (RST) are known to be quite limited in size, which makes discourse parsing very challenging. To alleviate this problem, we leverage boundary features and a data augmentation strategy to tap the full potential of our parser. We evaluate with two methods, and experiments on the RST-DT corpus show that our parser improves performance mainly through the effective combination of local and global information, with the boundary features and the data augmentation strategy also contributing. Based on gold-standard elementary discourse units (EDUs), our parser significantly outperforms the baseline systems in nuclearity detection and achieves competitive results on the other three indicators (span, relation, and full). Based on automatically segmented EDUs, our parser still outperforms previous state-of-the-art work.
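
The central idea, fusing a local (bottom-up, per-EDU) view with a global (whole-span, top-down) view when scoring candidate split points, can be illustrated with a short sketch. The following PyTorch fragment is a hypothetical illustration, not the authors' implementation; the module names, the BiGRU and mean-pooling choices, and the dimensions are assumptions made for exposition.

import torch
import torch.nn as nn

class BidirectionalSplitScorer(nn.Module):
    """Scores split points inside a span of EDUs (names are illustrative)."""
    def __init__(self, edu_dim: int, hidden: int = 256):
        super().__init__()
        # Local view: a BiGRU over the EDUs of the current span (bottom-up signal).
        self.local_enc = nn.GRU(edu_dim, hidden, bidirectional=True, batch_first=True)
        # Global view: a summary of the whole span (top-down signal).
        self.global_proj = nn.Linear(edu_dim, 2 * hidden)
        self.score = nn.Linear(4 * hidden, 1)

    def forward(self, edus: torch.Tensor) -> torch.Tensor:
        # edus: (1, n_edus, edu_dim) encodings of the EDUs in the current span
        local, _ = self.local_enc(edus)                   # (1, n, 2*hidden)
        global_rep = self.global_proj(edus.mean(dim=1))   # (1, 2*hidden)
        global_rep = global_rep.unsqueeze(1).expand_as(local)
        # Combine local and global information at every candidate boundary.
        fused = torch.cat([local, global_rep], dim=-1)    # (1, n, 4*hidden)
        return self.score(fused).squeeze(-1)              # (1, n) split scores

def parse(span: torch.Tensor, scorer: BidirectionalSplitScorer):
    """Top-down parsing: recursively split the span at the best-scoring boundary."""
    if span.size(1) <= 1:
        return span  # a single EDU is a leaf
    scores = scorer(span)
    k = int(scores[0, :-1].argmax())  # boundary after EDU k; both sides non-empty
    return (parse(span[:, : k + 1], scorer), parse(span[:, k + 1 :], scorer))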

Regular Paper
Augmenting Trigger Semantics to Improve Event Coreference Resolution
Journal of Computer Science and Technology 2023, 38 (3): 600-611
Published: 30 May 2023

Due to the small size of annotated corpora and the sparsity of event trigger words, event coreference resolvers cannot capture enough event semantics, especially trigger semantics, to identify coreferential event mentions. To address these issues, this paper proposes a trigger semantics augmentation mechanism to boost event coreference resolution. First, the mechanism applies a trigger-oriented masking strategy to pre-train a BERT (Bidirectional Encoder Representations from Transformers)-based encoder (Trigger-BERT) on the large-scale unlabeled Gigaword corpus. Second, it combines the event semantic relations from the Trigger-BERT encoder with the event interactions from a soft-attention mechanism to resolve event coreference. Experimental results on both the KBP2016 and KBP2017 datasets show that the proposed model outperforms several state-of-the-art baselines.
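
The trigger-oriented masking strategy can be understood as a variant of standard masked-language-model corruption in which trigger tokens are masked at a much higher rate, forcing the encoder to model trigger semantics from context. The fragment below is a minimal, hypothetical sketch, not the paper's code; the `trigger_mask` input, the masking probabilities, and the -100 ignore-label convention (PyTorch's default CrossEntropyLoss ignore_index) are assumptions.

import torch

def trigger_oriented_masking(
    input_ids: torch.Tensor,     # (batch, seq_len) token ids
    trigger_mask: torch.Tensor,  # (batch, seq_len) 1 where a trigger token sits
    mask_token_id: int,
    p_trigger: float = 0.8,      # masking rate for trigger tokens (assumed)
    p_other: float = 0.15,       # standard MLM rate elsewhere
):
    # Per-token masking probability: high inside trigger spans, standard outside.
    probs = trigger_mask.float() * p_trigger + (1.0 - trigger_mask.float()) * p_other
    masked = torch.bernoulli(probs).bool()
    # Labels: predict the original token at masked positions, ignore the rest.
    labels = torch.where(masked, input_ids, torch.full_like(input_ids, -100))
    # Corrupted input: replace masked positions with [MASK].
    corrupted = torch.where(masked, torch.full_like(input_ids, mask_token_id), input_ids)
    return corrupted, labels  # feed to a BERT MLM head for pre-training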
