Assessing ChatGPT's Diagnostic Accuracy and Therapeutic Strategies in Oral Pathologies: A Cross-Sectional Study


Abstract

BACKGROUND: The rapid adoption of artificial intelligence (AI) models in the medical field is due to their ability to collaborate with clinicians in the diagnosis and management of a wide range of conditions. This study assesses the diagnostic accuracy and therapeutic strategies of Chat Generative Pre-trained Transformer (ChatGPT) in comparison with dental professionals across 12 clinical cases.

METHODOLOGY: ChatGPT 3.5 was queried for diagnoses and management plans for 12 retrospective cases. Physicians rated the complexity of the clinical scenarios and their agreement with the ChatGPT responses on a five-point Likert scale. Case complexity was then compared with the accuracy of the diagnoses and treatment plans.

RESULTS: ChatGPT exhibited high accuracy in providing differential diagnoses and acceptable treatment plans. In a survey of 30 attending physicians, the scenarios received an overall median difficulty rating of 3, with acceptable agreement on ChatGPT's differential diagnosis accuracy (overall median 4). Univariate ordinal regression analysis showed that lower diagnosis scores were associated with lower treatment-management scores.

CONCLUSIONS: ChatGPT's rapid processing aids healthcare by offering an objective, evidence-based approach, reducing human error and workload. However, potential biases may affect outcomes and challenge less-experienced practitioners. AI in healthcare, including ChatGPT, is still evolving, and further research is needed to understand its full potential in analyzing clinical information, establishing diagnoses, and suggesting treatments.
