Comparing Artificial Intelligence and Obstetrics Residents in Answering Standardized Patient Questions Regarding Gestational Diabetes


Abstract

Introduction: This study evaluated the performance of three artificial intelligence (AI) chatbots (GPT-3.5 and GPT-4o (OpenAI, San Francisco, USA), and DeepSeek V3 0324 (DeepSeek AI, Beijing, China)) against eight gynecology residents in answering questions related to gestational diabetes mellitus (GDM), with the aim of assessing and comparing the accuracy and completeness of responses to standardized patient questions on gestational diabetes in pregnancy.

Methods: Twenty-four questions were answered by the three chatbots and the eight residents. Two faculty members independently rated each response for accuracy and completeness on a 5-point scale. Independent-samples t-tests were used for statistical analysis.

Results: Mean accuracy scores were 3.64 for residents, 4.67 for GPT-3.5, 4.69 for GPT-4o, and 4.81 for DeepSeek V3 0324. Mean completeness scores were 2.05 for residents, 2.83 for GPT-3.5, 4.00 for GPT-4o, and 4.75 for DeepSeek V3 0324. All three AI models scored significantly higher than residents on accuracy (p < 0.001). Completeness scores were significantly higher for GPT-4o and DeepSeek V3 0324 (p < 0.001), whereas the difference between GPT-3.5 and residents was not statistically significant (p = 0.058).

Conclusion: The AI models, particularly DeepSeek V3 0324 and GPT-4o, outperformed gynecology residents in both accuracy and completeness when answering GDM-related questions. These preliminary findings suggest that AI tools may complement medical education and clinical support, but further research is required before broader implementation.
