Comparative analysis of artificial intelligence chatbots in orthodontic emergency scenarios: ChatGPT-3.5, ChatGPT-4.0, Copilot, and Gemini



Abstract

OBJECTIVES: To evaluate and compare the accuracy of four AI chatbots, ChatGPT-3.5, ChatGPT-4.0, Copilot, and Gemini, in responding to orthodontic emergency scenarios.

MATERIALS AND METHODS: Forty frequently asked questions related to orthodontic emergencies were posed to the chatbots. The questions were categorized as fixed orthodontic treatment, clear aligner treatment, eating and oral hygiene, pain and discomfort, general concerns, retention, and sports and travel. The responses were evaluated by three orthodontic experts using a five-point Likert scale, and statistical analysis was conducted to assess variations in accuracy across chatbots.

RESULTS: Statistical analysis revealed significant differences among the chatbots. Gemini and ChatGPT-4.0 demonstrated the highest accuracy in responding to orthodontic emergencies, followed by Copilot, whereas ChatGPT-3.5 had the lowest accuracy scores. In addition, the "Fixed Orthodontic Treatment" category showed a statistically significant difference (P = .043), with Gemini outperforming the other chatbots; no statistically significant differences were found in the remaining categories.

CONCLUSIONS: AI chatbots show potential for providing immediate assistance in orthodontic emergencies, but their accuracy varies across models and question categories.
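The evaluation procedure described above (three experts rating each response on a five-point Likert scale, then comparing chatbots) can be sketched in code. This is a minimal illustrative sketch, not the study's analysis pipeline; the ratings below are hypothetical placeholder values, and the aggregation by mean score is an assumption about how per-question ratings might be summarized.

```python
# Hedged sketch: aggregating expert Likert ratings (1-5) per chatbot.
# All ratings below are hypothetical placeholders, NOT the study's data.
ratings = {
    "ChatGPT-3.5": [3, 3, 4],  # one expert score per rater, single question
    "ChatGPT-4.0": [5, 4, 5],
    "Copilot":     [4, 4, 4],
    "Gemini":      [5, 5, 4],
}

def mean_rating(scores):
    """Average of the experts' five-point Likert scores for one response."""
    return sum(scores) / len(scores)

# Mean score per chatbot, then a ranking from highest to lowest accuracy.
means = {bot: mean_rating(scores) for bot, scores in ratings.items()}
ranking = sorted(means, key=means.get, reverse=True)
```

In the actual study, a formal statistical test (the abstract does not name which) was applied across chatbots and question categories; the sketch above only shows the descriptive aggregation step that would precede such a test.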
