Abstract
BACKGROUND: Training large language models (LLMs) with in-domain data can significantly enhance their performance, leading to more accurate and reliable question-answering (QA) systems essential for supporting clinical decision-making and educating patients.

METHODS: This study introduces ophthalmic LLMs trained on well-curated, in-domain datasets. We present a substantial open-source ophthalmic language dataset for model training. Our models, EYE-Llama, were pre-trained on an ophthalmology-specific dataset comprising paper abstracts, textbooks, and Wikipedia articles, and subsequently fine-tuned on a diverse range of QA pairs. The models were compared against baseline Llama 2, ChatDoctor, Meditron, Llama 3, and ChatGPT (GPT-3.5) on four distinct test sets and evaluated quantitatively (accuracy, F1 score, BERTScore, BARTScore, and BLEU score) and qualitatively by two ophthalmologists.

FINDINGS: On the synthetic dialogue test set, evaluated with three metrics (BERTScore, BARTScore, and BLEU score), our models demonstrated superior performance. When evaluated with BERTScore, our models surpassed Llama 2, Llama 3, Meditron, and ChatDoctor in F1 score and performed on par with ChatGPT, which has 175 billion parameters (EYE-Llama: 0.57, Llama 2: 0.56, Llama 3: 0.55, Meditron: 0.50, ChatDoctor: 0.56, ChatGPT: 0.57). EYE-Llama also outperformed these models on BARTScore and BLEU score. On the MedMCQA test set, the fine-tuned models achieved higher accuracy than Llama 2, Meditron, and ChatDoctor (EYE-Llama: 0.39, Llama 2: 0.33, ChatDoctor: 0.29, Meditron: 0.22); however, the ChatGPT and Llama 3 models outperformed EYE-Llama, achieving accuracies of 0.55, 0.78, and 0.90, respectively. On the PubMedQA test set, our model achieved higher accuracy than all other models (EYE-Llama: 0.96, Llama 2: 0.90, Llama 3: 0.92, Meditron: 0.76, ChatGPT: 0.93, ChatDoctor: 0.92).

INTERPRETATION: The study shows that domain-specific pre-training and fine-tuning of LLMs such as EYE-Llama enhance their performance in specialized medical domains. The EYE-Llama models surpass baseline Llama 2 in all evaluations, highlighting the effectiveness of specialized LLMs in medical QA systems.

FUNDING: Funded by NEI R15EY035804 (MNA), R21EY035271 (MNA), and a UNC Charlotte Faculty Research Grant (MNA).
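The BLEU scores reported above measure n-gram overlap between a model's answer and a reference answer. As a rough illustration only (not the exact implementation used in this study, which is not specified in the abstract), a minimal smoothed sentence-level BLEU can be sketched in pure Python:

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """Count the n-grams of a token list (empty when the list is too short)."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU with uniform weights, add-one smoothing,
    and the standard brevity penalty. Whitespace tokenization only."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = ngrams(cand, n)
        ref_ngrams = ngrams(ref, n)
        # clipped n-gram matches: each candidate n-gram counts at most
        # as often as it appears in the reference
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        # add-one smoothing avoids log(0) when no n-grams match;
        # identical sentences still score exactly 1.0
        log_precisions.append(math.log((overlap + 1) / (total + 1)))
    # brevity penalty: penalize candidates shorter than the reference
    bp = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))
    return bp * math.exp(sum(log_precisions) / max_n)
```

In practice, production evaluations use library implementations (e.g. corpus-level BLEU with proper tokenization) rather than a hand-rolled sentence-level variant like this one.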