Assessing ChatGPT-4 as a clinical decision support tool in neuro-oncology radiotherapy: a prospective comparative study



Abstract

BACKGROUND AND PURPOSE: Large language models (LLMs) such as ChatGPT-4 have shown potential for medical decision support, but their reliability in specialized fields remains uncertain. This study aimed to evaluate ChatGPT-4's performance as a clinical decision support tool in neuro-oncology radiotherapy by comparing its treatment recommendations for patients with central nervous system tumors against a multidisciplinary tumor board's decisions, an independent specialist's opinion, and published guidelines.

MATERIALS AND METHODS: We prospectively collected 101 neuro-oncology cases (May 2024–May 2025) presented at a tertiary-care tumor board. Key case details were entered into ChatGPT-4 with a standardized query asking whether to recommend radiotherapy and, if so, the target volumes and dose. The AI's recommendations were recorded and compared to the tumor board's consensus, a blinded radiation oncologist's recommendation, and ESMO guideline indications when applicable. Concordance rates (percentage agreement) and Cohen's kappa were calculated. Sensitivity and specificity were assessed using the reference decisions as ground truth. McNemar's test was used to evaluate any bias in discordant recommendations.

RESULTS: ChatGPT-4 matched the tumor board's radiotherapy recommendations in 76% of cases (κ = 0.61). Agreement with the independent specialist was 79% (κ = 0.58). In 61 low-complexity cases with clear guidelines, ChatGPT-4 concurred with guideline-based indications in 76.7% of cases, missing some recommended treatments (sensitivity 73%, specificity 100%). In intermediate-complexity scenarios, concordance with the tumor board was 70.8%, with most discrepancies due to the AI recommending treatment that experts did not (sensitivity 85.7%, specificity 64.7%). In high-complexity cases, agreement was 90.9% (sensitivity 100%, specificity 83.3%). Overall, ChatGPT-4 showed an overtreatment bias, more often recommending radiotherapy when the human experts chose observation (p < 0.05 for AI vs. tumor board discordances). Its overall agreement (76%) was lower than that of the human specialist (90%).

CONCLUSION: ChatGPT-4 can reproduce many expert radiotherapy decisions in neuro-oncology, reflecting substantial absorption of standard clinical practice. However, it cannot substitute for human judgment: the AI omitted some indicated treatments in straightforward cases and suggested unnecessary therapy in some borderline cases, indicating a lack of nuanced clinical reasoning. Careful human oversight is essential if such models are to be used for clinical decision support.
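The agreement statistics named in the methods (percentage agreement, Cohen's kappa, sensitivity, specificity, and an exact McNemar test on the discordant cells) can all be derived from a single 2×2 table of AI vs. reference decisions. The sketch below shows these calculations on a purely hypothetical table chosen only to yield roughly 76% agreement on 101 cases; the counts are illustrative assumptions, not the study's data.

```python
from math import comb

def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
    a = both recommend RT, b = AI yes / reference no,
    c = AI no / reference yes, d = both recommend observation."""
    n = a + b + c + d
    po = (a + d) / n                        # observed agreement
    p_yes = ((a + b) / n) * ((a + c) / n)   # chance agreement on "treat"
    p_no = ((c + d) / n) * ((b + d) / n)    # chance agreement on "observe"
    pe = p_yes + p_no
    return (po - pe) / (1 - pe)

def mcnemar_exact_p(b, c):
    """Exact two-sided McNemar test: under the null, the b + c
    discordant cases split 50/50 between the two disagreement types."""
    n = b + c
    k = min(b, c)
    one_tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * one_tail)

# Hypothetical counts (n = 101), NOT the published results:
a, b, c, d = 55, 18, 6, 22

agreement   = (a + d) / (a + b + c + d)  # fraction of concordant decisions
kappa       = cohens_kappa(a, b, c, d)
sensitivity = a / (a + c)  # AI recommends RT when the reference does
specificity = d / (b + d)  # AI withholds RT when the reference does
p_value     = mcnemar_exact_p(b, c)      # b > c suggests overtreatment bias
```

With b (AI recommends radiotherapy, experts observe) exceeding c, a significant McNemar p-value captures exactly the kind of directional overtreatment bias the abstract reports.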
