As a highly vascular part of the eye, the choroid is crucial to the diagnosis of various ocular diseases. However, limited research has focused on the inner structure of the choroid because sufficient accurate label data are difficult to obtain, particularly for the choroidal vessels. Meanwhile, existing direct choroidal vessel segmentation methods for vessel-assisted intelligent diagnosis of ophthalmic diseases remain unsatisfactory because of noisy data, while synergistic segmentation methods sacrifice vessel segmentation performance for the choroid layer segmentation task. Common cascaded structures also suffer from error propagation during training. To address these challenges, we propose a cascade learning segmentation method for the inner vessel structures of the choroid. Specifically, we propose the Transformer-Assisted Cascade Learning Network (TACLNet) for choroidal vessel segmentation, which adopts a two-stage training strategy: pre-training for choroid layer segmentation followed by joint training for choroid layer and choroidal vessel segmentation. We also enhance the skip connections with a multi-scale subtraction connection module (MSC) that captures differential and detailed information simultaneously. Additionally, we introduce an auxiliary Transformer branch (ATB) to integrate global features into the segmentation process. Experimental results show that our method achieves state-of-the-art performance for choroidal vessel segmentation. We further validate its superiority for retinal fluid segmentation on a publicly available optical coherence tomography (OCT) dataset. These results demonstrate that TACLNet advances choroidal vessel segmentation and holds significant value for ophthalmic research and clinical application.
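To make the subtraction-based skip connection concrete, the following is a minimal NumPy sketch of the idea behind an MSC-style module: aligning feature maps from adjacent encoder scales and taking their element-wise absolute difference so that each skip output carries both detail and differential cues. All function names, the nearest-neighbor upsampling, and the additive fusion are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbor 2x upsampling so features from adjacent scales align.
    return x.repeat(2, axis=-2).repeat(2, axis=-1)

def subtraction_unit(f_fine, f_coarse):
    """Element-wise absolute difference between aligned feature maps.

    f_fine:   (C, H, W)     finer-scale encoder feature map
    f_coarse: (C, H/2, W/2) coarser-scale encoder feature map
    """
    return np.abs(f_fine - upsample2x(f_coarse))

def msc_skip(features):
    """Illustrative multi-scale subtraction connection.

    `features` lists encoder feature maps from fine to coarse. Each skip
    output fuses the original (detailed) feature with the differential
    information against its coarser neighbor; the coarsest level passes
    through unchanged.
    """
    outs = []
    for i in range(len(features) - 1):
        diff = subtraction_unit(features[i], features[i + 1])
        outs.append(features[i] + diff)
    outs.append(features[-1])
    return outs
```

In a real network the subtraction would operate on learned convolutional features and the fused maps would feed the decoder; the sketch only shows the data flow between scales.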
Texture provides an important cue for many computer vision applications, and texture image classification has been an active research area in recent years. Recently, deep learning techniques using convolutional neural networks (CNNs) have emerged as the state of the art: CNN-based features yield a significant performance improvement over earlier handcrafted features. In this study, we demonstrate that the discriminative power of CNN-based features can be improved further, enabling more accurate classification of texture images. In particular, we design a discriminative neural-network-based feature transformation (NFT) method that maps CNN-based features to lower-dimensional descriptors using an ensemble of neural networks optimized for the classification objective. For evaluation, we used three standard benchmark datasets for texture image classification: KTH-TIPS2, FMD, and DTD. Our experimental results show improved classification performance over the state of the art.
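The ensemble transformation described above can be sketched as follows: each ensemble member maps a high-dimensional CNN feature vector to a compact descriptor, and the member outputs are concatenated. This is only a data-flow illustration under stated assumptions; in the actual NFT method each member would be trained for the classification objective, whereas here untrained single-layer random-weight projections stand in for the trained networks, and all names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_member(in_dim, out_dim):
    # Stand-in for one trained ensemble member: a single affine layer + ReLU.
    # Random weights are used purely to illustrate shapes and data flow.
    W = rng.standard_normal((in_dim, out_dim)) / np.sqrt(in_dim)
    b = np.zeros(out_dim)
    return lambda x: np.maximum(x @ W + b, 0.0)

def nft_ensemble(cnn_features, n_members=4, out_dim=32):
    """Map (N, D) CNN features to (N, n_members * out_dim) descriptors by
    concatenating the outputs of an ensemble of small transforms."""
    in_dim = cnn_features.shape[1]
    members = [make_member(in_dim, out_dim) for _ in range(n_members)]
    return np.concatenate([m(cnn_features) for m in members], axis=1)
```

For example, 512-dimensional CNN features for a batch of 10 images would be reduced to 4 × 32 = 128 dimensions per image, after which a standard classifier can be applied to the compact descriptors.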