AI Chatbots in Answering Questions Related to Ocular Oncology: A Comparative Study Between DeepSeek v3, ChatGPT-4o, and Gemini 2.0



Abstract

Background Artificial intelligence (AI) chatbots are increasingly used in healthcare for information dissemination and clinical decision support, but their reliability and applicability in subspecialties such as ocular oncology remain largely unassessed. This study aimed to evaluate the accuracy, completeness, readability, and real-world utility of three prominent AI chatbots, ChatGPT-4o (OpenAI, San Francisco, California, USA), DeepSeek v3 (DeepSeek, Hangzhou, Zhejiang, China), and Gemini 2.0 (Google DeepMind, London, UK), in responding to clinically relevant questions about ocular malignancies.

Methods A cross-sectional observational study was conducted at a tertiary eye care institute in Northern India. Five clinical questions covering key ocular oncologic conditions were created and standardized by ocular oncology experts, then input into ChatGPT-4o, DeepSeek v3, and Gemini 2.0. Responses were independently evaluated using a structured proforma assessing correctness, completeness, readability (Flesch-Kincaid score, word count, sentence count), presence of irrelevant data, applicability to the Indian healthcare setting, and reliability. Data were analyzed using the Kruskal-Wallis test and ANOVA.

Results All three chatbots demonstrated comparable correctness scores (mean 3.4, SD 0.49). However, four of five responses from each chatbot were deemed incomplete. DeepSeek v3 provided the most verbose and readable answers (mean 533.8 words; Flesch score 38.0), while ChatGPT-4o generated the shortest but most clinically reliable responses (mean reliability 3.2). Gemini 2.0 exhibited the greatest variability in length and structure. No irrelevant content was observed in any chatbot's responses. Only 2/5 responses from ChatGPT-4o and 1/5 from each of the other two chatbots were directly applicable to Indian clinical practice.
Conclusion While AI chatbots can offer factually accurate responses to ocular oncology-related queries, they often fall short in completeness and clinical applicability. ChatGPT-4o showed the most balanced performance, though regional customization and expert oversight remain essential. Current models are not yet suitable for unsupervised use in high-stakes clinical scenarios.
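The readability metric used in the study can be illustrated directly. Below is a minimal Python sketch of the Flesch Reading Ease formula with a naive syllable heuristic; the study presumably used a dedicated scoring tool, so exact values on real chatbot responses would differ from this approximation:

```python
import re

def count_syllables(word):
    # Naive heuristic: drop a silent trailing 'e', then count
    # runs of consecutive vowels as syllables (minimum of one).
    word = word.lower()
    if word.endswith("e") and len(word) > 2:
        word = word[:-1]
    vowel_groups = re.findall(r"[aeiouy]+", word)
    return max(1, len(vowel_groups))

def flesch_reading_ease(text):
    # Flesch Reading Ease = 206.835 - 1.015*(words/sentences)
    #                               - 84.6*(syllables/words)
    # Higher scores indicate easier text; a score near 38
    # (DeepSeek v3's mean above) corresponds to difficult,
    # college-level prose.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

For example, short declarative text such as "The cat sat. The dog ran." scores very high (easy), whereas long, polysyllabic clinical prose of the kind these chatbots produce pushes the score down toward the 30–40 range reported above.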
