Abstract
INTRODUCTION: The aim of this study was to develop an interpretation pipeline that generates reports from slit-lamp (SL) images using vision-language models (VLMs).
METHODS: An image-text alignment module (Bootstrapping Language-Image Pretraining [BLIP]) for report generation was developed using a dataset of SL images paired with medical reports from Zhejiang Provincial People's Hospital (Zhaohui Hospital). These data were split into training and internal validation sets to fine-tune two VLM frameworks (LLaVA and Qwen2.5-VL). A dataset from Bijie Hospital served as the external validation set for evaluating the frameworks.
RESULTS: The Zhaohui Hospital dataset included 1612 SL images with medical reports; the Bijie Hospital dataset included 100. For the refined LLaVA framework, the BLEU (1-4) scores were 0.560-0.715, the ROUGE-L score was 0.731, the CIDEr score was 1.712, and the SPICE score was 0.329. For the refined Qwen2.5-VL framework, the BLEU (1-4) scores were 0.536-0.696, the ROUGE-L score was 0.696, the CIDEr score was 1.729, and the SPICE score was 0.208. Both models also performed well in disease classification: overall accuracy was 0.87 for the refined LLaVA framework and 0.88 for the refined Qwen2.5-VL framework, with high accuracy (≥ 0.9) for eyelid disease, pterygium, glaucoma, corneal disease, and conjunctivitis. Interobserver agreement among ophthalmologists was substantial, with κ scores between 0.714 and 0.777. In the human evaluation, the 100 reports generated by the LLaVA model scored above 2.7 on all four metrics: correctness (2.72), completeness (2.79), harmlessness (2.88), and satisfaction (2.73). The 100 reports produced by the Qwen2.5-VL model received slightly lower scores than LLaVA in correctness (2.63), completeness (2.70), and satisfaction (2.71).
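To make the reported text-similarity metrics concrete, the following is a minimal sketch of how BLEU-style modified n-gram precision and ROUGE-L (longest-common-subsequence F-measure) compare a generated report against a reference. This is an illustrative simplification, not the study's evaluation code: the example report texts are invented, and full BLEU additionally combines the n-gram precisions with a geometric mean and a brevity penalty.

```python
from collections import Counter

def ngram_precision(candidate, reference, n):
    """Modified (clipped) n-gram precision, the core term of BLEU-n.

    Each candidate n-gram counts at most as often as it appears
    in the reference, then the clipped count is divided by the
    total number of candidate n-grams.
    """
    cand = [tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1)]
    ref = [tuple(reference[i:i + n]) for i in range(len(reference) - n + 1)]
    if not cand:
        return 0.0
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
    return clipped / len(cand)

def rouge_l(candidate, reference):
    """ROUGE-L F-measure from the longest common subsequence (LCS)."""
    m, n = len(candidate), len(reference)
    # Dynamic-programming table for LCS length.
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if candidate[i] == reference[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[m][n]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / m, lcs / n
    return 2 * precision * recall / (precision + recall)

# Hypothetical generated vs. reference report fragments (not study data).
gen = "mild pterygium encroaching on the nasal cornea".split()
ref = "pterygium encroaching on the nasal cornea of the right eye".split()
print(round(ngram_precision(gen, ref, 1), 3))  # BLEU-1 precision term
print(round(rouge_l(gen, ref), 3))             # ROUGE-L F-measure
```

CIDEr and SPICE follow the same generated-versus-reference pattern but weight n-grams by TF-IDF and compare scene-graph tuples, respectively, so they are omitted here.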
CONCLUSIONS: This study introduced a framework for SL report generation, which enhanced ophthalmic image interpretation and highlighted the potential of VLMs to assist ophthalmologists and patients.