AI performance in emergency medicine fellowship examination: comparative analysis of ChatGPT-4o, Gemini 2.0, Claude 3.5, and DeepSeek R1 models



Abstract

BACKGROUND/AIM: This study evaluated the accuracy rates and response consistency of four large language models (ChatGPT-4o, Gemini 2.0, Claude 3.5, and DeepSeek R1) in answering questions from the Emergency Medicine Fellowship Examination (YDUS), administered for the first time in Türkiye.

MATERIALS AND METHODS: In this observational study, 60 multiple-choice questions from the Emergency Medicine YDUS administered on 15 December 2024 were classified as knowledge-based (n = 26), visual content (n = 2), and case-based (n = 32). Each question was presented three times to each of the four large language models. Accuracy was evaluated according to overall accuracy, strict accuracy, and ideal accuracy criteria, and response consistency was measured using Fleiss' Kappa.

RESULTS: ChatGPT-4o was the most successful model in terms of overall accuracy (90.0%), while DeepSeek R1 showed the lowest performance (76.7%); Claude 3.5 (83.3%) and Gemini 2.0 (80.0%) demonstrated moderate success. By category, ChatGPT-4o achieved the highest success with 92.3% accuracy on knowledge-based questions and 90.6% on case-based questions. In terms of response consistency, Claude 3.5 showed the highest agreement (Fleiss' Kappa = 0.68) and Gemini 2.0 the lowest (Fleiss' Kappa = 0.49). Inconsistent hallucinations were more frequent in Gemini 2.0 and DeepSeek R1, whereas persistent hallucinations were less common in ChatGPT-4o and Claude 3.5.

CONCLUSION: Large language models can achieve high accuracy rates on knowledge and clinical reasoning questions in emergency medicine but differ in response consistency and hallucination tendency. While these models have significant potential for use in medical education and as clinical decision support systems (CDSS), they require further development to provide reliable, up-to-date, and accurate information.
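The consistency metric used in the study, Fleiss' Kappa, measures agreement beyond chance when the same items receive multiple categorical ratings, here, three repeated answers per exam question. As a minimal illustration (not the authors' analysis code), the standard formula can be sketched in plain Python; the input format (a list of per-question answer lists) is an assumption for the example:

```python
from collections import Counter

def fleiss_kappa(ratings):
    """Fleiss' kappa for N items, each rated the same number of times.

    ratings: list of per-item rating lists, e.g. three model answers
    (option letters A-E) for each exam question.
    """
    n = len(ratings[0])  # ratings (repetitions) per item
    N = len(ratings)     # number of items (questions)
    categories = sorted({c for row in ratings for c in row})
    # n_ij: how many of the n ratings assigned item i to category j
    counts = [[Counter(row)[c] for c in categories] for row in ratings]
    # Mean observed per-item agreement P̄
    P_bar = sum((sum(x * x for x in row) - n) / (n * (n - 1))
                for row in counts) / N
    # Expected chance agreement P̄_e from category marginals
    p_j = [sum(row[j] for row in counts) / (N * n)
           for j in range(len(categories))]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)
```

With this definition, identical answers across all three repetitions yield a kappa of 1.0, and values such as the reported 0.68 (Claude 3.5) versus 0.49 (Gemini 2.0) indicate substantial versus moderate agreement on conventional interpretation scales.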
