Performance evaluation of large language models on Korean medical licensing examination: a three-year comparative analysis


Abstract

Performance evaluation of large language models (LLMs) in non-English medical contexts remains limited, particularly for medical licensing examinations that include both text- and image-based questions. We therefore evaluated the performance and reliability of three LLMs (GPT-4o, Claude 3.5 Sonnet, and Gemini 1.5 Pro) on Korean Medical Licensing Examination (KMLE) questions from 2022 to 2024. We analyzed 942 KMLE questions encompassing text-only and image-based formats across various medical specialties. Reproducibility was evaluated through repeated testing, and inter-model agreement was analyzed using pairwise comparisons. GPT-4o achieved the highest accuracy (83.2%), followed by Claude 3.5 Sonnet (79.5%) and Gemini 1.5 Pro (76.6%). GPT-4o and Claude 3.5 Sonnet performed better on text-only questions than on image-based ones, whereas Gemini 1.5 Pro performed consistently across both formats. All three models showed the strongest performance in internal medicine, pediatrics, and psychiatry, with relatively weak results in medical law. Reproducibility was outstanding: Claude 3.5 Sonnet, Gemini 1.5 Pro, and GPT-4o achieved rates of 99.9%, 99.5%, and 97.7%, respectively. Strong inter-model agreement was observed, particularly between GPT-4o and Claude 3.5 Sonnet. LLMs demonstrate competent performance on medical knowledge assessments even in non-English contexts, although challenges persist in processing image-based questions and specialized domains. This study provides insights that may inform the future development and application of LLMs in medical education and assessment, although further validation in real-world educational settings is needed to establish their practical utility.
