Comparative analysis of six large language models in perioperative decision support for geriatric patients with multimorbidity: a three-dimensional evaluation framework


Abstract

BACKGROUND: While large language models (LLMs) show promise in healthcare, their reliability in high-stakes perioperative management for elderly patients with multimorbidity remains critically underexplored.

METHODS: This benchmarking study evaluated five general-purpose LLMs (ChatGPT, Gemini, DeepSeek, Claude, Kimi) and one domain-optimized model (New Youth Anesthesia Artificial Intelligence Assistant, NYAAI) using a novel three-dimensional framework assessing guideline compliance, clinical applicability, and safety redundancy. A simulated case of an 84-year-old male with a femoral fracture and multimorbidity was developed. Two blinded anesthesiologists scored anonymized outputs on a 5-point Likert scale. Additionally, to account for the rapid evolution of AI models, a supplementary analysis was conducted to evaluate the robustness and sensitivity of current model versions.

RESULTS: NYAAI achieved the highest total score (12/15), excelling in clinical applicability (5/5) through domain-optimized parameterization. However, it exhibited selective guideline adherence, omitting temperature management and delirium protocols. General-purpose models demonstrated moderate guideline compliance (ChatGPT: 4/5; Gemini: 3/5) but generated contextually inappropriate recommendations. Safety redundancy emerged as a universal failure: no model addressed extreme-event protocols (aortic rupture management).

CONCLUSION: This study evaluated six LLMs for perioperative decision support in a geriatric patient with multimorbidity. The findings confirm that LLMs are useful as structured protocol generators but insufficient as autonomous clinical agents. Domain-optimized models enhance operational feasibility, yet they heighten the tension between safety redundancy and contextual adaptability. General-purpose models, despite their broad knowledge, are prone to generating inaccurate or hallucinated content. To maximize efficiency without jeopardizing medical safety, LLMs should be positioned as extensions of expert systems rather than independent decision-makers.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12871-025-03605-x.
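The three-dimensional scoring described above (two blinded raters, a 5-point Likert scale per dimension, summed to a maximum of 15) can be sketched as a small aggregation routine. This is a minimal illustration of the framework's arithmetic, not the authors' analysis code; the rater values below are hypothetical and chosen only to reproduce a 12/15 total with 5/5 clinical applicability, as reported for NYAAI.

```python
from statistics import mean

# The three evaluation dimensions from the framework.
DIMENSIONS = ("guideline_compliance", "clinical_applicability", "safety_redundancy")

def total_score(ratings):
    """Aggregate two blinded raters' 5-point Likert scores.

    `ratings` maps each dimension to a list of per-rater scores (1-5).
    The per-dimension score is the rounded rater mean; the total is the
    sum across the three dimensions (maximum 15).
    """
    per_dim = {d: round(mean(ratings[d])) for d in DIMENSIONS}
    return per_dim, sum(per_dim.values())

# Hypothetical ratings for illustration only (not the study's raw data):
example = {
    "guideline_compliance": [4, 4],
    "clinical_applicability": [5, 5],
    "safety_redundancy": [3, 3],
}
per_dim, total = total_score(example)
print(per_dim, total)  # total is 12 of a possible 15
```

In practice, a discrepancy-resolution step (e.g., discussion or a third rater when the two scores diverge) would precede aggregation; the sketch assumes agreement has already been reached.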
