Artificial intelligence in the prescription of acute medical treatments in primary healthcare - comparison of the performance of family physicians and ChatGPT


Abstract

INTRODUCTION: Artificial intelligence (AI) is increasingly recognized as a transformative force in healthcare, showing significant promise in supporting healthcare professionals. AI has many applications in healthcare, including real-time decision support, disease diagnosis, and personalized medicine. However, clinical trials and further research are needed to evaluate the practical effectiveness of AI in primary healthcare.

OBJECTIVE OF THE STUDY: This study aims to assess the accuracy of ChatGPT, an AI-powered chatbot, in therapeutic decision-making during acute disease consultations in primary care, and to compare its performance with that of general family physicians. The goal was to determine how closely ChatGPT could replicate the decisions physicians make on the basis of standard clinical guidelines.

MATERIALS AND METHODS: A cross-sectional study was conducted at three primary healthcare units in the Central Region of Portugal. The analysis involved three phases: (1) collecting data from healthcare professionals, (2) gathering therapeutic proposals from ChatGPT v3.5 based on physician-defined diagnoses, and (3) comparing the treatments proposed by ChatGPT v3.5 and by the physicians, using the Dynamed platform as the gold standard for correct prescriptions.

RESULTS: Of 860 consultations, 138 were excluded for not meeting the inclusion criteria. The diagnoses of ChatGPT v3.5 and the physicians coincided in 26.2% of cases, while in 29.1% of cases there was no agreement between the AI and the physicians' diagnoses. The therapeutic decisions made by ChatGPT v3.5 were correct in 55.6% of cases, versus 54.3% for the physicians, and incorrect in 5.2% of cases, versus 11% for the physicians. Furthermore, the therapeutic proposals of ChatGPT v3.5 were 'approximate' to the correct treatment in 24% of cases, compared with an approximation rate of 17.1% for the physicians.

CONCLUSION: This study suggests that AI, specifically ChatGPT v3.5, can match or slightly exceed physicians in therapeutic decision accuracy. This highlights the potential for AI to act as an effective auxiliary tool rather than a replacement for healthcare professionals. AI is most effective when used in collaboration with healthcare professionals, augmenting their capabilities and improving overall healthcare delivery. Ultimately, AI can serve as a powerful aid to healthcare professionals, helping improve patient care and healthcare outcomes, particularly in primary care.
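The comparison above rests on classifying each therapeutic proposal as correct, approximate, or incorrect against the gold standard, then reporting each category's share of the total. A minimal sketch of that tally is shown below; the labels and counts are made up for illustration and are not the study's data.

```python
from collections import Counter

def outcome_rates(labels):
    """Return each outcome's share of the total, as a percentage rounded to one decimal."""
    counts = Counter(labels)
    total = len(labels)
    return {outcome: round(100 * n / total, 1) for outcome, n in counts.items()}

# Hypothetical classifications for 10 consultations (illustrative only)
chatgpt_labels = ["correct"] * 6 + ["approximate"] * 3 + ["incorrect"] * 1
physician_labels = ["correct"] * 5 + ["approximate"] * 2 + ["incorrect"] * 3

print(outcome_rates(chatgpt_labels))
print(outcome_rates(physician_labels))
```

With real per-consultation classifications in place of the made-up lists, the same tally reproduces the percentage breakdown reported in the results.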
