Radiology report generation aims to automatically produce textual reports from input images, a critical process that aids accurate diagnosis and lightens the workload of radiologists. Following recent advances in Large Language Models (LLMs), several Retrieval-Augmented Generation (RAG) based report generation models have been proposed. Despite continuously improving performance, these report generation models often suffer from two main limitations: interference from irrelevant information, and a lack of alignment between the input image and the resulting generated report. In this study, we propose a semantic-feedback-based RAG radiology report generation model, namely RAGSemRad. RAGSemRad comprises two key components: a fine-grained semantic retrieval module and a semantic assessment module. The fine-grained semantic retrieval module is designed to retrieve adequate and relevant prompt information while ignoring irrelevant interference. This is achieved by clustering the data at the semantic level and leveraging the domain knowledge within a large pre-trained visual-language model, thus alleviating the issues of hallucination and data bias. Further, the semantic assessment module raises the performance upper bound by improving the alignment between the input image and the resulting generated report, utilizing supervision signals derived from paired image-label data. Experimental evaluations are conducted on two benchmarks, IU X-Ray and MIMIC-CXR, to assess the performance of RAGSemRad. The results demonstrate that RAGSemRad achieves competitive performance compared to state-of-the-art methods, showcasing its potential to advance automatic radiology report generation.
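To make the cluster-then-retrieve idea concrete, below is a minimal sketch of fine-grained semantic retrieval: candidate reports are clustered in an embedding space (here with a toy k-means), and a query is ranked only against reports in its nearest cluster, so reports from semantically unrelated clusters cannot leak into the prompt. This is an illustrative assumption, not the paper's implementation; the function names (`kmeans`, `retrieve`) are hypothetical, and in practice the embeddings would come from a large pre-trained visual-language encoder rather than the toy vectors used here.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Toy k-means over embedding rows of X; returns (centers, labels)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each embedding to its nearest center
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers, labels

def retrieve(query, X, reports, centers, labels, top_k=2):
    """Retrieve top_k reports from the cluster nearest to the query embedding.

    Restricting ranking to one semantic cluster is what filters out
    irrelevant interference before prompt construction.
    """
    c = int(np.argmin(((centers - query) ** 2).sum(-1)))
    idx = np.where(labels == c)[0]
    sims = X[idx] @ query / (
        np.linalg.norm(X[idx], axis=1) * np.linalg.norm(query) + 1e-8
    )
    order = idx[np.argsort(-sims)][:top_k]
    return [reports[i] for i in order]

# Toy example: two well-separated semantic clusters of report embeddings.
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
reports = ["lung report A", "lung report B", "heart report A", "heart report B"]
centers, labels = kmeans(X, k=2)
retrieved = retrieve(np.array([1.0, 0.05]), X, reports, centers, labels)
```

In this sketch a query embedding near the first cluster retrieves only the lung reports; the retrieved texts would then serve as prompt context for the LLM generator.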
The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).