Volume 26, Issue 5

A Computer-Aided System for Ocular Myasthenia Gravis Diagnosis

Guanjie Liu, Yan Wei, Yunshen Xie, Jianqiang Li, Liyan Qiao, and Ji-jiang Yang
Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
Neurology Department, The Second Affiliated Hospital of Tsinghua University, Beijing 100040, China
Department of Automation, Tsinghua University, Beijing 100084, China

Abstract

Clinical aided diagnosis of Ocular Myasthenia Gravis (OMG) is currently time-consuming and labor-intensive, and it lacks quantitative standards. To address this problem, an aided diagnostic system for OMG is proposed. The system computes three clinical indicators: eyelid distance, sclera distance, and palpebra superior fatigability test time. For the first two indicators, a semantic segmentation model was constructed to extract the pathological features of the patient's eye image, dividing it into three regions: iris, sclera, and background. The indicators were then calculated from the pixel positions in the segmentation mask. For the last indicator, a calculation method based on the Eyelid Aspect Ratio (EAR) is proposed, which better reflects the change of eyelid distance over time. The system was evaluated on the collected patient data. The segmentation model achieves a mean Intersection-over-Union (mIoU) of 86.05%, and a paired-sample t-test comparing the results obtained by the system with those obtained by doctors yielded p values all greater than 0.05, indicating no significant difference between the two. Thus, the system can reduce the cost of clinical diagnosis and has high application value.
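The EAR mentioned above originates in Soukupová and Čech's real-time blink-detection work: it measures eye openness from six landmarks placed around the eye, so it varies with eyelid distance over time. A minimal sketch of that standard formula follows; the function name and example coordinates are illustrative, not the system's actual code.

```python
import numpy as np

def eyelid_aspect_ratio(p):
    """Eyelid Aspect Ratio from six eye landmarks.

    p is a (6, 2) array in the conventional ordering: p[0] and p[3]
    are the horizontal eye corners, p[1] and p[2] lie on the upper
    eyelid, and p[5] and p[4] lie on the lower eyelid.
    """
    v1 = np.linalg.norm(p[1] - p[5])  # first vertical eyelid distance
    v2 = np.linalg.norm(p[2] - p[4])  # second vertical eyelid distance
    h = np.linalg.norm(p[0] - p[3])   # horizontal eye width
    return (v1 + v2) / (2.0 * h)

# Illustrative landmarks for an open eye; a drooping eyelid would
# shrink the vertical distances and hence the EAR.
open_eye = np.array([[0, 0], [1, 2], [2, 2], [3, 0], [2, -2], [1, -2]], float)
print(round(eyelid_aspect_ratio(open_eye), 3))  # → 1.333
```

Because the horizontal width in the denominator is roughly constant, a falling EAR over the course of a sustained-upgaze test traces the eyelid drooping that the fatigability time is meant to capture.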

Keywords: semantic segmentation, ocular myasthenia gravis, computer-aided system, eyelid aspect ratio
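The reported mIoU of 86.05% averages the per-class Intersection-over-Union across the segmented regions. A minimal sketch of how mIoU is conventionally computed from integer label masks; the class ids and function name are assumptions for illustration, not the paper's code.

```python
import numpy as np

def mean_iou(pred, gt, num_classes=3):
    """Mean Intersection-over-Union over the given classes.

    pred and gt are integer label masks of identical shape with class
    ids 0..num_classes-1 (e.g., 0 = background, 1 = iris, 2 = sclera).
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# Tiny 2x2 example: class 0 matches exactly, classes 1 and 2 each
# overlap on one of two pixels.
pred = np.array([[0, 1], [1, 2]])
gt = np.array([[0, 1], [2, 2]])
print(round(mean_iou(pred, gt), 4))  # → 0.6667
```

In practice the per-class IoUs would be accumulated over the whole test set before averaging, but the per-image form above shows the metric's structure.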


Publication history

Received: 27 February 2021
Accepted: 15 March 2021
Published: 20 April 2021
Issue date: October 2021

Copyright

© The author(s) 2021

Rights and permissions

© The author(s) 2021. The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
