Chatbots versus retina specialists in answering real-world retina questions


Abstract

BACKGROUND: Chatbots powered by large language models have shown promising results when addressing medical queries, thus transforming access to medical information. Although these chatbots produce detailed and accurate responses, it is unclear how they perform when handling real, unedited patient questions, particularly in non-English languages. This study aimed to assess the readability, accuracy, and comprehensiveness of responses to retinal disease queries provided by four chatbots (ChatGPT 4.0, ConsensusGPT, Gemini, and Claude 3) compared to responses from retina specialists.

METHODS: In this cross-sectional, comparative, and blinded study, twenty unedited questions about retinal diseases were randomly selected from a popular online video channel in Portuguese. The questions were submitted to the four selected chatbots and to fellowship-trained retina specialists. Two independent retinal experts evaluated the responses using standardized Likert scales for accuracy and completeness. Readability was assessed using the Flesch Reading Ease Score and the Flesch-Kincaid Grade Level tests. Additional metrics, including word count and response generation time, were analyzed. Data were compared among groups using non-parametric statistical tests, including the Kruskal-Wallis test with Dunn's pairwise comparisons and chi-squared tests, with a two-sided p-value threshold of 0.05 for statistical significance.

RESULTS: Retinal specialists and the Gemini chatbot produced responses with higher readability, indicating lower educational levels were needed for comprehension. In contrast, ChatGPT 4.0, ConsensusGPT, and Claude 3 delivered more detailed and accurate answers but required a higher reading level. ChatGPT 4.0 and ConsensusGPT achieved superior quality and comprehensiveness ratings compared to human experts and the other chatbots. Additionally, all AI systems generated responses significantly faster than the human specialists. Evaluators could correctly distinguish between human-generated and AI-generated responses in most cases.

CONCLUSIONS: Artificial intelligence chatbots demonstrate considerable promise for rapidly disseminating accurate medical information directly to the end-user, not only in English. However, optimizing the simplicity of their language is essential to ensure that detailed responses remain accessible to a broad audience. Future research should aim to replicate our findings with larger datasets of questions, with the goal of refining these systems to balance comprehensive content with user-friendly language.
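The two readability metrics named in METHODS are standard formulas over average sentence length and average syllables per word. As a minimal sketch (not the authors' implementation), the following Python computes both, using a naive vowel-group heuristic for syllable counting; real readability tools use more careful syllable dictionaries:

```python
import re

def count_syllables(word: str) -> int:
    """Naive heuristic: count vowel groups, dropping a trailing silent 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1 and not word.endswith(("le", "ee")):
        n -= 1
    return max(n, 1)

def flesch_scores(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level) for `text`."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    asl = len(words) / sentences   # average sentence length (words/sentence)
    asw = syllables / len(words)   # average syllables per word
    fre = 206.835 - 1.015 * asl - 84.6 * asw    # Flesch Reading Ease
    fkgl = 0.39 * asl + 11.8 * asw - 15.59      # Flesch-Kincaid Grade Level
    return fre, fkgl
```

Higher Flesch Reading Ease means easier text; higher Flesch-Kincaid Grade Level means more years of schooling are required, which is why the abstract describes the more detailed chatbot answers as needing "a higher reading level."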
