Evaluation of DeepSeek-R1 and ChatGPT-4o on the Chinese national medical licensing examination: a multi-year comparative study



Abstract

Large language models (LLMs) have demonstrated remarkable capabilities in natural language understanding and reasoning. However, their real-world applicability in high-stakes medical assessments remains underexplored, particularly in non-English contexts. This study evaluates the performance of DeepSeek-R1 and ChatGPT-4o on the Chinese National Medical Licensing Examination (NMLE), a comprehensive benchmark of medical knowledge and clinical reasoning, for the years 2019-2021, using question-level binary accuracy (correct = 1, incorrect = 0) as the outcome. A generalized linear mixed model (GLMM) with a binomial distribution and logit link was used to examine the fixed effects of model type, year, and subject unit, including their interactions, with random intercepts across questions. Post hoc pairwise comparisons assessed differences across model-year interactions. DeepSeek-R1 significantly outperformed ChatGPT-4o overall (β = -1.829, p < 0.001). Temporal analysis revealed a significant decline in ChatGPT-4o's accuracy from 2019 to 2021 (p < 0.05), whereas DeepSeek-R1 maintained more stable performance. By subject, Unit 3 showed the highest accuracy relative to Unit 1 (β = 0.344, p = 0.001). A significant interaction in 2020 (β = -0.567, p = 0.009) indicated an amplified performance gap between the two models. These results highlight the importance of model selection and domain adaptation. Further investigation is needed to account for potential confounders, such as variation in question difficulty or language bias over time, which could also influence these trends. This longitudinal evaluation highlights both the potential and the limitations of LLMs in medical licensing contexts: while current models show promising results, further fine-tuning is needed for clinical applicability.
The NMLE offers a robust benchmark for the future development of trustworthy AI-assisted medical decision support tools in non-English settings.
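The modeling setup described above (question-level binary accuracy regressed on model type and year through a logit link) can be illustrated with a minimal sketch. This is not the authors' code: it fits a fixed-effects-only logistic regression by gradient ascent on synthetic data, omitting the per-question random intercepts of the full GLMM, and all coefficients and cell sizes below are invented for illustration.

```python
# Simplified sketch of the abstract's analysis: binary accuracy ~ model + year
# with a logit link. Synthetic data; the real study used a GLMM with random
# intercepts per question, which this toy example omits.
import math
import random

random.seed(0)

def simulate(n_per_cell=400):
    """Simulate question-level correctness for two models over three years.

    Assumed (hypothetical) true log-odds: a DeepSeek-R1 advantage of +1.0
    and a yearly decline of -0.3 affecting only the weaker model.
    """
    rows = []
    for model in (0, 1):        # 0 = ChatGPT-4o, 1 = DeepSeek-R1
        for year in (0, 1, 2):  # 2019, 2020, 2021 (centered at 2019)
            eta = 0.5 + 1.0 * model - 0.3 * year * (1 - model)
            p = 1.0 / (1.0 + math.exp(-eta))
            for _ in range(n_per_cell):
                rows.append((model, year, 1 if random.random() < p else 0))
    return rows

def fit_logistic(rows, lr=0.5, steps=600):
    """Gradient ascent on the Bernoulli log-likelihood (logit link)."""
    b = [0.0, 0.0, 0.0]  # intercept, model effect, year effect
    n = len(rows)
    for _ in range(steps):
        g = [0.0, 0.0, 0.0]
        for model, year, y in rows:
            eta = b[0] + b[1] * model + b[2] * year
            p = 1.0 / (1.0 + math.exp(-eta))
            for j, x in enumerate((1.0, float(model), float(year))):
                g[j] += (y - p) * x  # score contribution of one observation
        b = [bj + lr * gj / n for bj, gj in zip(b, g)]
    return b

coefs = fit_logistic(simulate())
# A positive model coefficient means higher odds of a correct answer for
# DeepSeek-R1; a negative year coefficient reflects the simulated decline.
print(coefs)
```

A full analysis would add subject-unit terms, model-by-year interactions, and a random intercept per question (e.g. via a mixed-model library); the gradient-ascent fit here only conveys the logit-link fixed-effects core.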
