EYE-Llama, an in-domain large language model for ophthalmology


Abstract

BACKGROUND: Training Large Language Models (LLMs) with in-domain data can significantly enhance their performance, leading to more accurate and reliable question-answering (Q&A) systems essential for supporting clinical decision-making and educating patients. METHODS: This study introduces ophthalmic LLMs trained on well-curated, in-domain datasets. We present a substantial open-source ophthalmic language dataset for model training. Our models (EYE-Llama) were pre-trained on an ophthalmology-specific dataset, including paper abstracts, textbooks, and Wikipedia articles. Subsequently, the models were fine-tuned on a diverse range of QA pairs. Our models were compared to baseline Llama 2, ChatDoctor, Meditron, Llama 3, and ChatGPT (GPT-3.5) models on four distinct test sets, and evaluated quantitatively (accuracy, F1 score, BERTScore, BARTScore, and BLEU score) and qualitatively by two ophthalmologists. FINDINGS: When evaluated on the synthetic dialogue test set with three different metrics (BERTScore, BARTScore, and BLEU score), our models demonstrated superior performance. Specifically, under BERTScore, our models surpassed Llama 2, Llama 3, Meditron, and ChatDoctor in F1 score and performed on par with ChatGPT, which has 175 billion parameters (EYE-Llama: 0.57, Llama 2: 0.56, Llama 3: 0.55, Meditron: 0.50, ChatDoctor: 0.56, ChatGPT: 0.57). Additionally, the EYE-Llama model outperformed the above models when evaluated with BARTScore and BLEU scores. On the MedMCQA test set, the fine-tuned models achieved higher accuracy than the Llama 2, Meditron, and ChatDoctor models (EYE-Llama: 0.39, Llama 2: 0.33, ChatDoctor: 0.29, Meditron: 0.22). However, the ChatGPT and Llama 3 models outperformed EYE-Llama, achieving accuracies of 0.55, 0.78, and 0.90, respectively.
On the PubMedQA test set, our model showed improved accuracy over all other models (EYE-Llama: 0.96, Llama 2: 0.90, Llama 3: 0.92, Meditron: 0.76, ChatGPT: 0.93, ChatDoctor: 0.92). INTERPRETATION: This study shows that pre-training and fine-tuning LLMs like EYE-Llama enhances their performance in specific medical domains. Our EYE-Llama models surpass baseline Llama 2 in all evaluations, highlighting the effectiveness of specialized LLMs in medical QA systems. FUNDING: NEI R15EY035804 (MNA), R21EY035271 (MNA), and a UNC Charlotte Faculty Research Grant (MNA).
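The quantitative evaluation above combines exact-match accuracy (for multiple-choice sets such as MedMCQA and PubMedQA) with overlap-based generation metrics such as BLEU. As a rough, illustrative sketch (not the authors' actual evaluation code; real BLEU uses multiple n-gram orders and smoothing, and the paper's BERTScore/BARTScore require pretrained models), the two simplest metrics can be computed as follows:

```python
from collections import Counter
import math

def accuracy(preds, golds):
    """Fraction of exact-match answers, as used for MCQ test sets."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def bleu1(candidate, reference):
    """Simplified unigram BLEU: clipped unigram precision times a
    brevity penalty. Full BLEU averages precisions over n-grams 1-4."""
    cand, ref = candidate.split(), reference.split()
    if not cand:
        return 0.0
    overlap = sum((Counter(cand) & Counter(ref)).values())  # clipped counts
    precision = overlap / len(cand)
    # Brevity penalty discourages overly short candidates
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision
```

For example, `accuracy(["A", "B", "C"], ["A", "B", "D"])` yields 2/3, and `bleu1` returns 1.0 only when the candidate reproduces the reference tokens exactly.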
