Addressing Commonly Asked Questions in Urogynecology: Accuracy and Limitations of ChatGPT


Abstract

INTRODUCTION AND HYPOTHESIS: Existing literature suggests that large language models such as Chat Generative Pre-trained Transformer (ChatGPT) may provide inaccurate and unreliable health care information, and the literature on its performance in urogynecology is scarce. The aim of the present study was to assess ChatGPT's ability to accurately answer commonly asked urogynecology patient questions.

METHODS: An expert panel of five board-certified urogynecologists and two fellows developed ten questions commonly asked by patients in a urogynecology office. Questions were phrased using the diction and verbiage a patient might use when asking a question over the internet. ChatGPT responses were evaluated with the Brief DISCERN (BD) tool, a validated scoring system for online health care information; scores ≥ 16 are consistent with good-quality content. Responses were also graded on their accuracy and consistency with expert opinion and published guidelines.

RESULTS: The average score across all ten questions was 18.9 ± 2.7. Nine of the ten (90%) questions received a response determined to be of good quality (BD ≥ 16). The lowest-scoring topic was "Pelvic Organ Prolapse" (mean BD = 14.0 ± 2.0); the highest-scoring topic was "Interstitial Cystitis" (mean BD = 22.0 ± 0). ChatGPT provided no references for its responses.

CONCLUSIONS: ChatGPT provided high-quality responses to 90% of the questions based on the expert panel's review with the BD tool. Nonetheless, given the evolving nature of this technology, continued analysis is crucial before ChatGPT can be accepted as accurate and reliable.
