Cervical cancer is a common gynecological cancer, and radiotherapy, one of its principal treatments, depends on accurate target area delineation. Manual delineation is time-consuming and yields low accuracy, so automating it is important. At present, traditional image segmentation algorithms applied to target delineation achieve low accuracy, and deep learning methods face difficulties of their own, such as insufficient training data and long training times. U-net, the network most widely used in medical image segmentation, still has several disadvantages when handling small targets with unclear boundaries. Guided by the characteristics of the clinical target volume (CTV) segmentation task for cervical cancer, this study modified the U-net structure and optimized the training loss to improve the accuracy of small-target detection. The modified structure handles target boundaries well through operations such as bilinear upsampling. Finally, the proposed algorithm was evaluated on the dataset and compared with several deep learning-based algorithms; the results indicate that the proposed approach is superior to the compared methods.
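The abstract names the two main modifications, bilinear upsampling in the decoder and an optimized training loss for small targets, without giving implementation detail. The following is a minimal, hypothetical PyTorch sketch of what such components could look like; the UpBlock layout and the choice of a soft Dice loss are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpBlock(nn.Module):
    """Decoder block using bilinear upsampling instead of transposed
    convolution, a common choice for preserving boundary detail and
    avoiding checkerboard artifacts (hypothetical layout, not the
    paper's exact structure)."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        # Bilinearly upsample to the skip connection's spatial size,
        # then fuse the two feature maps along the channel dimension.
        x = F.interpolate(x, size=skip.shape[2:], mode="bilinear",
                          align_corners=False)
        return self.conv(torch.cat([x, skip], dim=1))

def dice_loss(logits, target, eps=1.0):
    """Soft Dice loss: overlap-based, so small foreground regions are
    not drowned out by the large background as they can be with plain
    cross-entropy."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(2, 3))
    union = prob.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()
```

In a full U-net, a block like `UpBlock` would replace each transposed-convolution stage of the decoder, and `dice_loss` could be combined with cross-entropy during training; both are standard remedies for small-target segmentation rather than details confirmed by the abstract.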
This work was supported by the National Natural Science Foundation of China (Nos. 62176245 and 62137002), Key Research and Development Program of Anhui Province (No. 202104a05020011), Key Science and Technology Special Project of Anhui Province (No. 202103a07020002), and the Fundamental Research Funds for the Central Universities.