Open Access

Honeycomb lung segmentation network based on P2T with CNN two-branch parallelism

Zhichao Li1, Gang Li1 (corresponding author), Ling Zhang1, Guijuan Cheng1, Shan Wu2
1 College of Software, Taiyuan University of Technology, Taiyuan 030024, China
2 CT Division, Shanxi Bethune Hospital, Taiyuan 030024, China

Abstract

Honeycomb lung lesions are difficult to segment accurately because of their diverse morphology and complex distribution. To address this problem, a network with a parallel two-branch structure is proposed. In the encoder, the Pyramid Pooling Transformer (P2T) backbone serves as the Transformer branch to capture the global features of the lesions, a convolutional branch extracts the lesions' local feature information, and a feature fusion module is designed to fuse the features of the two branches effectively. In the decoder, channel prior convolutional attention is used to enhance the model's ability to localize the lesion region. To counter the accuracy degradation caused by class imbalance in the dataset, an adaptive weighted hybrid loss function is designed for model training. Extensive experiments show that the proposed method performs well on the Honeycomb Lung dataset, achieving an Intersection over Union (IoU), mean Intersection over Union (mIoU), Dice coefficient, and Precision (Pre) of 0.8750, 0.9363, 0.9298, and 0.9012, respectively, outperforming other methods. Its IoU of 0.7941 and Dice coefficient of 0.8875 on the Covid dataset further demonstrate its strong performance.
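The abstract does not give the fusion module's exact form, so the following numpy sketch only illustrates the two-branch idea in miniature: a 3x3 mean filter stands in for the convolutional (local-feature) branch, a global-average context map stands in for the P2T (global-feature) branch, and a convex combination with a hypothetical weight `alpha` stands in for the learned fusion module.

```python
import numpy as np

def local_branch(x):
    """Stand-in for the CNN branch: a 3x3 mean filter captures local texture."""
    pad = np.pad(x, 1, mode="edge")
    out = np.zeros_like(x, dtype=float)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = pad[i:i + 3, j:j + 3].mean()
    return out

def global_branch(x):
    """Stand-in for the P2T branch: every pixel receives a global context value."""
    return np.full(x.shape, x.mean(), dtype=float)

def fuse(local_feat, global_feat, alpha=0.5):
    """Toy fusion module: convex combination of the two feature maps.
    In the actual network the fusion weights are learned."""
    return alpha * local_feat + (1 - alpha) * global_feat

x = np.arange(16, dtype=float).reshape(4, 4)
fused = fuse(local_branch(x), global_branch(x))
```

In the real architecture both branches produce multi-channel feature maps at several scales and the fusion is learned end to end; this sketch only shows how local and global responses over the same spatial grid can be combined pixel-wise.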
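The exact formulation of the adaptive weighted hybrid loss is not given in the abstract. As a generic illustration of the idea (a class-frequency-weighted cross-entropy term combined with a Dice term, a common recipe for imbalanced segmentation), one might write:

```python
import numpy as np

def adaptive_hybrid_loss(prob, gt, eps=1e-7):
    """Illustrative hybrid loss: inverse-frequency-weighted binary
    cross-entropy plus Dice loss. `prob` holds foreground probabilities,
    `gt` holds binary ground-truth labels (1 = lesion)."""
    fg_freq = gt.mean()                # fraction of foreground pixels
    w_fg = 1.0 - fg_freq               # rare class gets the larger weight
    w_bg = fg_freq
    bce = -(w_fg * gt * np.log(prob + eps)
            + w_bg * (1 - gt) * np.log(1 - prob + eps)).mean()
    inter = (prob * gt).sum()
    dice_loss = 1.0 - (2 * inter + eps) / (prob.sum() + gt.sum() + eps)
    return bce + dice_loss

gt = np.array([0.0, 0.0, 0.0, 1.0])        # heavily imbalanced toy mask
pred = np.array([0.1, 0.1, 0.1, 0.9])
loss = adaptive_hybrid_loss(pred, gt)
```

The inverse-frequency weighting keeps the sparse lesion pixels from being drowned out by the background term, while the Dice term directly optimizes region overlap; the paper's adaptive scheme may differ in its weighting details.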
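The reported metrics have standard definitions for binary segmentation; the sketch below computes them from binary masks, assuming (as is common for two-class tasks) that mIoU averages the foreground and background IoU:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """IoU, mIoU, Dice, and Precision for binary masks (1 = lesion)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()          # true-positive pixels
    union = np.logical_or(pred, gt).sum()
    iou = tp / union if union else 1.0
    denom = pred.sum() + gt.sum()
    dice = 2 * tp / denom if denom else 1.0
    precision = tp / pred.sum() if pred.sum() else 1.0
    # mIoU: mean of foreground IoU and background IoU
    bg_tp = np.logical_and(~pred, ~gt).sum()
    bg_union = np.logical_or(~pred, ~gt).sum()
    miou = (iou + bg_tp / bg_union) / 2
    return iou, miou, dice, precision

pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]])
iou, miou, dice, pre = segmentation_metrics(pred, gt)  # iou = 0.5, dice = 2/3
```

Note that Dice and IoU are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why the two scores track each other in the reported results.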

Intelligent and Converged Networks, pp. 336–355
Cite this article:
Li Z, Li G, Zhang L, et al. Honeycomb lung segmentation network based on P2T with CNN two-branch parallelism. Intelligent and Converged Networks, 2024, 5(4): 336-355. https://doi.org/10.23919/ICN.2024.0023


Received: 03 April 2024
Revised: 12 May 2024
Accepted: 03 July 2024
Published: 31 December 2024
© All articles included in the journal are copyrighted to the ITU and TUP.

This work is available under the CC BY-NC-ND 3.0 IGO license: https://creativecommons.org/licenses/by-nc-nd/3.0/igo/
