Multi-metric comparative evaluation of DeepSeek and ChatGPT in USMLE versus CNMLE for medical education


Abstract

Large language models (LLMs) such as ChatGPT and DeepSeek are gaining attention for their potential in medical education. This study evaluates the performance of ChatGPT and DeepSeek on the United States Medical Licensing Examination (USMLE) and the Chinese National Medical Licensing Examination (CNMLE), followed by targeted optimization methods to advance the efficient and effective application of LLMs in medical education. The study conducted a comparative quantitative analysis across multiple dimensions, including answer accuracy, consistency, number of reasoning characters, and runtime. Based on the identified limitations of the LLMs, targeted optimizations were explored, including the construction of a technical safeguard framework and a multi-dimensional evaluation system. On the USMLE, DeepSeek achieved an average accuracy of 92.59% with a Fleiss' Kappa of 0.96, versus 90.26% accuracy and a Fleiss' Kappa of 0.93 for ChatGPT. On the CNMLE, DeepSeek achieved 86.78% accuracy with a Fleiss' Kappa of 0.96, versus 79.44% and 0.90 for ChatGPT. Both DeepSeek and ChatGPT demonstrated the ability to identify flawed questions, yet both also produced incorrect answers due to hallucinations, and DeepSeek had a relatively longer runtime. To address these issues, the study proposes a knowledge-graph-based RAG fact-checking framework centered on evidence anchoring, together with a multi-dimensional evaluation system focused on reliability and safety. DeepSeek generally outperformed ChatGPT in accuracy, particularly on complex medical problems and Chinese medical knowledge, but at the cost of a longer runtime. The proposed optimization framework and evaluation system address core issues such as LLM hallucination, clarifying the positioning of LLMs as "auxiliary tools" that require rigorous fact-checking.
These solutions jointly form a core governance system for the application of LLMs in medical education, providing key support for their precise and efficient integration into educational scenarios. The study indicates that LLMs are expected to bring about a progressive transformation, evolving from functional enhancement to paradigm reconstruction.
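The consistency figures reported above are Fleiss' Kappa values, which measure agreement beyond chance when the same items receive multiple ratings (here, repeated model runs on the same questions). The abstract does not include the per-run data, so the example below is purely illustrative; this is a minimal pure-Python sketch of the standard Fleiss' Kappa formula, not the authors' evaluation code.

```python
def fleiss_kappa(ratings):
    """Fleiss' Kappa for a table where ratings[i][j] is the number of
    raters (model runs) assigning item i to category j.
    Assumes every item is rated by the same number of raters."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    n_cats = len(ratings[0])
    total = n_items * n_raters

    # Overall proportion of assignments falling in each category.
    p_j = [sum(row[j] for row in ratings) / total for j in range(n_cats)]

    # Per-item observed agreement among rater pairs.
    P_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in ratings]

    P_bar = sum(P_i) / n_items            # mean observed agreement
    P_e = sum(p * p for p in p_j)         # chance agreement
    return (P_bar - P_e) / (1 - P_e)      # undefined if P_e == 1

# Toy usage: 3 runs, 2 answer categories, perfect agreement -> kappa = 1.0
print(fleiss_kappa([[3, 0], [0, 3], [3, 0]]))  # -> 1.0
```

A Kappa near 0.96, as reported for DeepSeek, indicates that repeated runs almost always selected the same answer option.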
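The abstract names an evidence-anchoring fact-checking framework but gives no implementation details. The sketch below is a hypothetical toy illustration of the general idea only: each factual claim extracted from a model's answer is accepted only if it matches a triple retrieved from a knowledge graph, and unsupported claims are flagged for review. The knowledge-graph contents, the triple format, and the function names are all assumptions, not the paper's design.

```python
# Toy "knowledge graph" as a set of (subject, predicate, object) triples.
# Real systems would query a curated medical KG, not an in-memory set.
KNOWLEDGE_GRAPH = {
    ("metformin", "first_line_for", "type 2 diabetes"),
    ("warfarin", "monitored_by", "INR"),
}

def check_claims(claims):
    """Partition (s, p, o) claims into those anchored by KG evidence
    and those that should be flagged as potential hallucinations."""
    supported = [c for c in claims if c in KNOWLEDGE_GRAPH]
    unsupported = [c for c in claims if c not in KNOWLEDGE_GRAPH]
    return supported, unsupported

# Toy usage: one anchored claim, one unsupported claim to flag.
ok, flagged = check_claims([
    ("metformin", "first_line_for", "type 2 diabetes"),
    ("aspirin", "cures", "type 2 diabetes"),
])
```

In this reading, the framework treats the LLM's answer as a hypothesis and the knowledge graph as the ground truth, which is consistent with the abstract's positioning of LLMs as auxiliary tools requiring rigorous fact-checking.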
