Abstract
Performance evaluation of large language models (LLMs) in non-English medical contexts remains limited, particularly for medical licensing examinations that include both text- and image-based questions. We therefore evaluated the performance and reliability of three LLMs (GPT-4o, Claude 3.5 Sonnet, and Gemini 1.5 Pro) on Korean Medical Licensing Examination (KMLE) questions from 2022 to 2024. We analyzed 942 KMLE questions encompassing text-only and image-based formats across various medical specialties. Reproducibility was evaluated through repeated testing, and inter-model agreement was analyzed using pairwise comparisons. GPT-4o achieved the highest accuracy (83.2%), followed by Claude 3.5 Sonnet (79.5%) and Gemini 1.5 Pro (76.6%). While GPT-4o and Claude 3.5 Sonnet performed better on text-only questions than on image-based ones, Gemini 1.5 Pro performed consistently across both formats. The models demonstrated the strongest performance in internal medicine, pediatrics, and psychiatry, with relatively weak results in medical law. Reproducibility was outstanding, with Claude 3.5 Sonnet, Gemini 1.5 Pro, and GPT-4o achieving consistency rates of 99.9%, 99.5%, and 97.7%, respectively. Inter-model agreement was strong, particularly between GPT-4o and Claude 3.5 Sonnet. These results show that LLMs can perform competently on medical knowledge assessments even in non-English contexts, although challenges persist in processing image-based questions and specialized domains. This study provides insights that may inform the future development and application of LLMs in medical education and assessment, although further validation in real-world educational settings is needed to establish their practical utility.