Open Access Issue
False Negative Sample Detection for Graph Contrastive Learning
Tsinghua Science and Technology 2024, 29 (2): 529-542
Published: 22 September 2023

Recently, self-supervised learning has shown great potential in Graph Neural Networks (GNNs) through contrastive learning, which aims to learn discriminative features for each node without label information. The key to graph contrastive learning is data augmentation. The anchor node treats its augmented samples as positive samples and all remaining samples as negative samples, some of which may in fact be semantically positive. We call these mislabeled samples "false negative" samples; they seriously degrade the final learning effect. Because semantically similar samples are ubiquitous in graphs, the false negative problem is significant. To address this issue, the paper proposes a novel model, False negative sample Detection for Graph Contrastive Learning (FD4GCL), which uses attribute- and structure-aware detection to identify false negative samples. Experimental results on seven datasets show that FD4GCL outperforms state-of-the-art baselines and even exceeds several supervised methods.
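To make the false-negative issue concrete, the following is a minimal sketch (not the paper's actual FD4GCL method) of an InfoNCE-style contrastive loss in which negatives that are too similar to the anchor are flagged as suspected false negatives and excluded from the denominator. The cosine-similarity threshold here is a simplified stand-in for the attribute- and structure-aware detection the paper proposes; all function and parameter names are illustrative assumptions.

```python
import numpy as np

def infonce_with_fn_filter(anchor, positive, negatives, tau=0.5, sim_threshold=0.9):
    """InfoNCE-style loss that drops suspected false negatives.

    Negatives whose cosine similarity to the anchor exceeds
    `sim_threshold` are treated as likely false negatives and
    excluded from the denominator. This threshold rule is a toy
    stand-in for FD4GCL's attribute/structure-aware detection.
    Returns (loss, number_of_filtered_negatives).
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    pos_sim = cos(anchor, positive)
    neg_sims = [cos(anchor, n) for n in negatives]
    # Keep only negatives that look genuinely dissimilar to the anchor.
    kept = [s for s in neg_sims if s < sim_threshold]

    num = np.exp(pos_sim / tau)
    den = num + sum(np.exp(s / tau) for s in kept)
    return -np.log(num / den), len(neg_sims) - len(kept)
```

For example, a negative embedding nearly identical to the anchor (a likely false negative) is filtered out, while an orthogonal one is kept, so the loss is no longer penalized for pushing a semantically similar node away.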
