Analysis of AI-Generated Patient Education Guides for Urological Conditions: A Comparative Study Between ChatGPT and Gemini


Abstract

Introduction: Artificial intelligence (AI) chatbots are increasingly being used to create patient education guides (PEGs), yet the literature comparing the latest versions of these tools in terms of readability, reliability, and similarity is sparse. The aim of this study was to compare PEGs generated by ChatGPT 5.1 (OpenAI, San Francisco, CA, USA) and Gemini 3 Pro (Google LLC, Mountain View, CA, USA) across these domains for five common urological conditions: kidney stones, urinary tract infection, urinary retention, erectile dysfunction, and benign prostatic hyperplasia.

Methods: This cross-sectional study analysed PEGs generated by both AI chatbots for the five conditions using identical prompts. Readability was assessed using the Flesch Reading Ease Score and the Flesch-Kincaid Grade Level. Reliability and similarity were assessed using a modified DISCERN score and Turnitin, respectively. Statistical comparison was performed using the Mann-Whitney U test.

Results: None of the evaluated characteristics showed a statistically significant difference between the PEGs generated by the two AI chatbots.

Conclusion: PEGs generated by both AI chatbots exceeded the recommended reading level, demonstrated limited originality, and showed only moderate reliability, highlighting the need for professional oversight. Continued refinement of AI chatbots is necessary before AI-generated PEGs can be integrated into routine patient education.
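The two readability metrics named in the Methods are standard closed-form formulas over word, sentence, and syllable counts. As a minimal sketch of how such scores are computed, the snippet below implements both using the published coefficients; the syllable counter is a naive vowel-group heuristic of our own (dedicated tools use dictionary-based counts and will differ slightly), so this is illustrative rather than a reproduction of the study's tooling.

```python
import re


def count_syllables(word: str) -> int:
    # Naive heuristic: one syllable per run of consecutive vowels
    # (real readability tools use dictionary-based syllable counts).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def readability(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)

    wps = len(words) / len(sentences)   # mean words per sentence
    spw = syllables / len(words)        # mean syllables per word

    # Standard published coefficients for both formulas.
    fres = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fres, fkgl
```

Higher FRES means easier text (plain English is roughly 60-70), while FKGL maps directly to a US school grade; patient education materials are commonly recommended to sit at or below a sixth-grade level, which is the benchmark the abstract's "recommended reading level" refers to.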
