Report Generation System for Slit-Lamp Image Interpretation Using Vision-Language Models



Abstract

INTRODUCTION: The aim of this study was to develop an interpretation pipeline for report generation for slit-lamp (SL) images using vision-language models (VLMs).

METHODS: An image-text alignment module (Bootstrapping Language-Image Pretraining [BLIP]) for report generation was developed using a dataset of SL images paired with medical reports from Zhejiang Provincial People's Hospital (Zhaohui Hospital). These data were split into training and internal validation sets to fine-tune the BLIP frameworks (LLaVA and Qwen2.5-VL). A dataset from Bijie Hospital served as an external validation set for evaluating the frameworks.

RESULTS: The Zhaohui Hospital dataset included 1612 SL images and medical reports; the Bijie Hospital dataset included 100 SL images and medical reports. For the refined LLaVA framework, the BLEU-1 to BLEU-4 scores were 0.560-0.715, the ROUGE-L score was 0.731, the CIDEr score was 1.712, and the SPICE score was 0.329. For the refined Qwen2.5-VL framework, the BLEU-1 to BLEU-4 scores were 0.536-0.696, the ROUGE-L score was 0.696, the CIDEr score was 1.729, and the SPICE score was 0.208. Both models also performed well in disease classification: overall accuracy was 0.87 for the refined LLaVA framework and 0.88 for the refined Qwen2.5-VL framework, with high accuracies (≥ 0.9) observed for eyelid disease, pterygium, glaucoma, corneal disease, and conjunctivitis. Interobserver agreement among ophthalmologists was substantial, with κ scores between 0.714 and 0.777. In the human evaluation, the 100 reports generated by the LLaVA model scored above 2.7 on all four metrics: correctness (2.72), completeness (2.79), harmlessness (2.88), and satisfaction (2.73). The 100 reports produced by the Qwen model received slightly lower scores than LLaVA in correctness (2.63), completeness (2.70), and satisfaction (2.71).
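The text-overlap and agreement statistics reported above can be illustrated with minimal implementations. The `bleu` and `cohen_kappa` helpers below are simplified sketches (single reference, uniform n-gram weights, no smoothing), not the evaluation code used in the study; CIDEr and SPICE require corpus statistics and scene-graph parsing and are omitted.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU with uniform weights, a single reference,
    and the standard brevity penalty (an illustrative simplification)."""
    c, r = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = Counter(ngrams(c, n)), Counter(ngrams(r, n))
        overlap = sum((cand & ref).values())  # clipped n-gram matches
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0  # no smoothing applied
    log_avg = sum(math.log(p) for p in precisions) / max_n
    bp = 1.0 if len(c) > len(r) else math.exp(1 - len(r) / max(len(c), 1))
    return bp * math.exp(log_avg)

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' equal-length label sequences."""
    n = len(rater_a)
    p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / n  # observed agreement
    p_e = sum((rater_a.count(label) / n) * (rater_b.count(label) / n)
              for label in set(rater_a) | set(rater_b))      # chance agreement
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

# A perfect match scores BLEU = 1.0; partial rater agreement gives kappa in (0, 1).
print(bleu("the cat sat on the mat", "the cat sat on the mat"))  # → 1.0
print(cohen_kappa(list("dndd"), list("dnnd")))                   # → 0.5
```

In practice, published results like those above are computed with standard toolkits (e.g. the COCO caption evaluation suite), which also apply smoothing and multi-reference handling.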
CONCLUSIONS: This study introduced a framework for SL report generation, which enhanced ophthalmic image interpretation and highlighted the potential of VLMs to assist ophthalmologists and patients.
