Volume 6, Issue 3




An Encoder-Decoder Network for Automatic Clinical Target Volume Target Segmentation of Cervical Cancer in CT Images

Yizhan Fan 1, Zhenchao Tao 1, Jun Lin 2,3,4, Huanhuan Chen 1 (corresponding author)

1 School of Computer Science and Technology, University of Science and Technology of China, Hefei 230026, China
2 Alibaba-NTU Singapore Joint Research Institute, Nanyang Technological University, Singapore 639798, Singapore
3 Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR), Jinan 250101, China
4 China-Singapore International Joint Research Institute, Guangzhou 510555, China

Abstract

Cervical cancer is a common gynecological cancer, and radiotherapy, its most common treatment, depends on accurate delineation of the target area. Manual delineation is time-consuming and prone to error, so automating it is important. At present, traditional image segmentation algorithms achieve low accuracy on target area delineation, while deep learning algorithms face difficulties such as insufficient data and long training times. Although U-net is the most popular network for medical image segmentation, it still handles small targets with unclear boundaries poorly. Based on the characteristics of the clinical target volume (CTV) segmentation task for cervical cancer, this study modified the U-net structure and optimized the training loss to improve the accuracy of small target detection. The modified structure handles target boundaries well through operations such as bilinear upsampling. Finally, the proposed algorithm was evaluated on the dataset and compared with several deep learning-based algorithms; the results indicate that the proposed approach outperforms the compared methods.
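The two modifications named in the abstract, bilinear upsampling in the decoder and a training loss tuned for small targets, can be illustrated with a minimal sketch. This is not the authors' code: PyTorch is assumed, the module and function names are illustrative, and the soft Dice loss stands in for whichever loss the paper actually optimizes (Dice is a common choice for small-target segmentation because it is insensitive to the foreground/background class imbalance).

```python
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    """One U-net decoder step: bilinear upsample, concatenate the encoder
    skip connection, then refine with two 3x3 convolutions.

    Bilinear interpolation (instead of transposed convolution) produces
    smooth feature maps without checkerboard artifacts, which helps with
    fuzzy target boundaries."""

    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=True)
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        x = self.up(x)                   # double spatial resolution
        x = torch.cat([x, skip], dim=1)  # fuse encoder features at this scale
        return self.conv(x)

def dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1.0) -> torch.Tensor:
    """Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|). Because both terms scale
    with the target size, small targets are not swamped by the background."""
    pred = torch.sigmoid(logits)
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A decoder built from such blocks mirrors the encoder: at each scale, features from the deeper layer are upsampled and concatenated with the same-resolution encoder output before convolution.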

Keywords: image segmentation, cervical cancer, computed tomography (CT) scan image, encoder-decoder network



Publication history

Received: 09 February 2022
Revised: 24 March 2022
Accepted: 06 April 2022
Published: 09 August 2022
Issue date: September 2022

Copyright

© The author(s) 2022

Acknowledgements


This work was supported by the National Natural Science Foundation of China (Nos. 62176245 and 62137002), Key Research and Development Program of Anhui Province (No. 202104a05020011), Key Science and Technology Special Project of Anhui Province (No. 202103a07020002), and the Fundamental Research Funds for the Central Universities.

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
