A Cross-Sectional Study Comparing Patient Education Guides Created by ChatGPT and Google Gemini for Common Cardiovascular-Related Conditions


Abstract

Introduction

Obesity, hypertension, and hypertriglyceridemia are key components of metabolic syndrome, a major contributor to cardiovascular diseases (CVDs), which remain a leading cause of global mortality. Patient education on these conditions can empower individuals to adopt preventive measures and manage risks effectively. This study compares ChatGPT and Google Gemini, two prominent artificial intelligence (AI) tools, to evaluate their utility in creating patient education guides. ChatGPT is known for its conversational depth, while Google Gemini emphasizes advanced natural language processing. By analyzing readability, reliability, and content characteristics, the study highlights how these AI tools cater to diverse patient needs, aiming to improve health literacy outcomes.

Methodology

A cross-sectional study evaluated patient education guides on obesity, hypertension, and hypertriglyceridemia, focusing on their links to metabolic syndrome. Responses from ChatGPT and Google Gemini were analyzed for word count, sentence count, readability (using the Flesch-Kincaid calculator), similarity score (using QuillBot), and reliability score (using the modified DISCERN score), with statistical analyses performed using R version 4.3.2.

Results

Statistical analysis revealed a significant difference in word and sentence counts between the AI tools: ChatGPT averaged 591.50 words and 66 sentences, while Google Gemini averaged 351.50 words and 36 sentences (p = 0.001 and p < 0.0001, respectively). However, average words per sentence, average syllables per word, grade level, similarity percentage, and reliability scores did not differ significantly. Although Google Gemini had a higher Flesch Reading Ease score (41.75) than ChatGPT (34.10), this difference was not statistically significant (p = 0.080). Both tools exhibited similar readability and reliability, indicating their effectiveness for patient education, despite ChatGPT providing longer responses.
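The readability metric reported above is the Flesch Reading Ease score, which is computed from word, sentence, and syllable counts. A minimal Python sketch of the standard formula (the function name and sample counts are illustrative, not the study's raw data):

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease: higher scores indicate easier-to-read text.

    Standard formula:
        206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
    """
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# Illustrative counts: a 100-word passage with 10 sentences and 150 syllables.
score = flesch_reading_ease(100, 10, 150)
print(round(score, 2))
```

Scores in the 30-50 range (like the 34.10 and 41.75 reported here) are conventionally interpreted as "difficult," i.e., college-level reading.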
Conclusion

The study found no significant difference between the two AI tools in ease, grade, or reliability scores, and no correlation between ease and reliability scores.
