Research paper | Open Access

Developing a diagnostic support system for audiogram interpretation using deep learning-based object detection

Titipat Achakulvisut1, Suchanon Phanthong2, Thanawut Timpitak1, Kanpat Vesessook3, Sirinan Junthong2, Withita Utainrat2, Kanokrat Bunnag2 (corresponding author)
1 Department of Biomedical Engineering, Faculty of Engineering, Mahidol University
2 Department of Otolaryngology, Faculty of Medicine Vajira Hospital, Navamindradhiraj University
3 International Community School, Bangkok, Thailand

Abstract

Objective

To develop and evaluate an automated system for digitizing audiograms and classifying hearing loss levels, and to compare its performance with traditional methods and otolaryngologists' interpretations.

Design and Methods

We conducted a retrospective diagnostic study using 1,959 audiogram images from patients aged 7 years and older at the Faculty of Medicine, Vajira Hospital, Navamindradhiraj University. We employed an object detection approach to digitize audiograms and developed multiple machine learning models to classify six hearing loss levels. The dataset was split into 70% training (1,407 images) and 30% testing (352 images) sets. We compared our model's performance with classifications based on manually extracted audiogram values and otolaryngologists' interpretations.
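The digitization step described above converts symbols detected on the audiogram image into frequency/hearing-level pairs. The paper does not publish its conversion code, but the core idea can be sketched as a calibration from pixel coordinates to axis values: linear on the dB axis, logarithmic (octave-spaced) on the frequency axis. All gridline pixel positions below are made-up illustration values, not values from the study.

```python
import math

# Hypothetical calibration: pixel rows of the 0 dB and 120 dB gridlines, and
# pixel columns of the 125 Hz and 8000 Hz gridlines. A real system would
# detect these per image; the numbers here are invented for illustration.
Y_0_DB, Y_120_DB = 50.0, 650.0
X_125_HZ, X_8000_HZ = 80.0, 920.0

# Standard audiometric test frequencies to snap detections onto.
STANDARD_HZ = [125, 250, 500, 1000, 2000, 4000, 8000]

def pixel_to_db(y: float) -> int:
    """Map a detected symbol's y-centre to a hearing level, snapped to 5 dB."""
    db = (y - Y_0_DB) / (Y_120_DB - Y_0_DB) * 120.0
    return int(round(db / 5.0) * 5)

def pixel_to_hz(x: float) -> int:
    """Map an x-centre to frequency; audiogram x-axes are octave-scaled."""
    octaves = (x - X_125_HZ) / (X_8000_HZ - X_125_HZ) * math.log2(8000 / 125)
    freq = 125 * 2 ** octaves
    return min(STANDARD_HZ, key=lambda f: abs(f - freq))
```

With the calibration above, a symbol centred at (500, 250) maps to 1000 Hz at 40 dB HL.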

Results

Our object detection-based model achieved an F1-score of 94.72% in classifying hearing loss levels, comparable to the 96.43% F1-score obtained using manually extracted values. For the object detection-based pipeline, the Light Gradient Boosting Machine (LGBM) classifier performed best, with 94.72% accuracy, 94.72% F1-score, 94.72% recall, and 94.72% precision. For the manually extracted values, the Random Forest Classifier (RFC) achieved the highest performance in predicting hearing loss level, with 96.43% accuracy, an F1-score of 96.43%, recall of 96.43%, and precision of 96.45%.
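The classification target above is one of six hearing loss levels derived from the digitized thresholds. The abstract does not state the exact cut-offs used, so the sketch below uses a common clinical grading scheme (pure-tone average over the speech frequencies, banded at 25/40/55/70/90 dB HL) purely as an assumption to illustrate the labelling step.

```python
def pure_tone_average(thresholds_db):
    """PTA over the conventional speech frequencies (e.g. 500, 1000, 2000 Hz)."""
    return sum(thresholds_db) / len(thresholds_db)

def hearing_loss_level(pta: float) -> str:
    """Six-level grading. The band edges follow a common clinical scheme and
    are an assumption; the paper does not publish its cut-offs."""
    bands = [
        (25, "normal"),
        (40, "mild"),
        (55, "moderate"),
        (70, "moderately severe"),
        (90, "severe"),
    ]
    for upper_db, label in bands:
        if pta <= upper_db:
            return label
    return "profound"
```

For example, thresholds of 50, 60, and 65 dB HL give a PTA of about 58 dB, which this scheme labels "moderately severe".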

Conclusion

Our proposed automated approach for audiogram digitization and hearing loss classification performs comparably to traditional methods and otolaryngologists' interpretations. This system can potentially assist otolaryngologists in providing more timely and effective treatment by quickly and accurately classifying hearing loss.

Journal of Otology
Pages 26-32

Cite this article:
Achakulvisut T, Phanthong S, Timpitak T, et al. Developing a diagnostic support system for audiogram interpretation using deep learning-based object detection. Journal of Otology, 2025, 20(1): 26-32. https://doi.org/10.26599/JOTO.2025.9540005

1444 Views | 246 Downloads | Crossref: 0 | Web of Science: 0 | Scopus: 0 | CSCD: 0

Received: 04 August 2024
Accepted: 16 January 2025
Published: 20 March 2025
© 2025 PLA General Hospital Department of Otolaryngology Head and Neck Surgery. Publishing services by Tsinghua University Press.

This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).