Abstract
Deep learning offers notable promise for computational pathology, but its performance is constrained by the need for extensively annotated datasets, which are costly and laborious to produce. Self-supervised learning (SSL) provides an effective paradigm for learning discriminative representations from unannotated pathological images. However, existing SSL methods often overlook domain-specific characteristics of pathological images and suffer from the adverse effects of low-quality negative samples, leading to sub-optimal feature representations for downstream tasks. To overcome these limitations, we propose DSSCL, a Domain-Specific Self-Supervised Contrastive Learning framework, which incorporates two novel components: (1) a Stain-Separation Based Data Augmentation module that enhances stain-aware representation learning by fusing stain-separated components with the original hematoxylin and eosin (H&E) images, and (2) a Contrast-Aware Pair Refinement module that improves feature discriminability by filtering potential positives and mining hard negatives, thereby mitigating the influence of low-quality negatives. Extensive experiments demonstrate that DSSCL, using only 0.1% of the labeled data, matches the classification accuracy of an ImageNet-pretrained network fine-tuned with 10% labeled data, while also delivering competitive performance on detection and segmentation tasks, underscoring its effectiveness in learning transferable and robust feature representations across diverse downstream tasks. The code is available at https://github.com/junjianli106/DSSCL.
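To make the two modules concrete, below is a minimal NumPy sketch of the ideas the abstract describes. It is illustrative only: the stain separation uses the standard Ruifrok–Johnston color-deconvolution matrix, and the fusion weight `alpha`, the positive-filtering threshold `pos_thresh`, and the hard-negative count `k` are hypothetical hyperparameters, not values from the DSSCL paper.

```python
import numpy as np

# Ruifrok-Johnston H&E stain vectors (rows: hematoxylin, eosin, residual).
# DSSCL's exact separation method and fusion scheme are assumptions here.
STAIN_MATRIX = np.array([
    [0.650, 0.704, 0.286],   # hematoxylin
    [0.072, 0.990, 0.105],   # eosin
    [0.268, 0.570, 0.776],   # residual
])

def separate_stains(rgb):
    """Project an RGB tile into stain-concentration space via color deconvolution."""
    od = -np.log((rgb.astype(np.float64) + 1.0) / 256.0)   # Beer-Lambert optical density
    return od.reshape(-1, 3) @ np.linalg.inv(STAIN_MATRIX)  # per-pixel stain concentrations

def stain_fused_view(rgb, alpha=0.5):
    """Stain-separation-based augmentation: blend a hematoxylin-only
    reconstruction with the original H&E tile (alpha is illustrative)."""
    h, w, _ = rgb.shape
    conc = separate_stains(rgb)
    h_only = conc * np.array([1.0, 0.0, 0.0])               # keep hematoxylin component
    recon = np.clip(256.0 * np.exp(-(h_only @ STAIN_MATRIX)) - 1.0, 0, 255)
    fused = alpha * rgb + (1.0 - alpha) * recon.reshape(h, w, 3)
    return fused.astype(np.uint8)

def refine_negatives(anchor, candidates, pos_thresh=0.8, k=4):
    """Contrast-aware pair refinement: drop likely false negatives
    (too similar to the anchor), then keep the k hardest remaining negatives.

    anchor: (d,) L2-normalized embedding; candidates: (n, d) L2-normalized.
    """
    sims = candidates @ anchor                  # cosine similarities
    keep = sims < pos_thresh                    # filter potential positives
    kept, kept_sims = candidates[keep], sims[keep]
    order = np.argsort(-kept_sims)[:k]          # hardest = most similar survivors
    return kept[order]
```

The augmentation exposes the network to views with altered stain composition, encouraging stain-aware invariances, while the refinement step removes negatives that are plausibly from the same semantic class before mining the hardest of the rest.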