Large Language Model Recommendations for Empiric Antibiotics Versus Clinician Prescribing: A Non-Interventional Paired Retrospective Antimicrobial Stewardship Analysis


Abstract

Background/Objectives: Antimicrobial resistance (AMR) remains a major global health threat, strengthening the case for antimicrobial stewardship strategies that limit unnecessary broad-spectrum empiric therapy while preserving timely escalation when clinically warranted. Before any clinical deployment of large language model (LLM)-based antibiotic decision support can be considered, structured offline evaluation is needed to assess whether model outputs align with auditable stewardship constraints under real-world admission contexts. We therefore evaluated whether post hoc LLM-generated empiric antibiotic recommendations showed greater concordance with a pre-specified stewardship benchmarking framework than clinician-initiated regimens in a retrospective shadow-mode setting. Methods: Single-center retrospective paired evaluation at the Clinical Emergency Hospital of Bucharest (Internal Medicine, 2020-2024). The unit of analysis was the admission (N = 493), with paired 24 h empiric regimens (clinician-prescribed vs. post hoc LLM-recommended via the OpenAI API; not visible to clinicians; no influence on care). Local laboratory-derived epidemiology was precomputed from microbiology exports and provided as structured prompt context to approximate information parity with clinicians' implicit knowledge of local ecology. Primary (prespecified) endpoint: any contextual guardrail violation (unjustified carbapenem/antipseudomonal/anti-MRSA use under prespecified structured severity/MDR-risk rules), exact McNemar test. Key secondary (prespecified) endpoint: Δ contextual guardrail penalty (LLM − Clin), sign test and Wilcoxon signed-rank test (ties reported). Ethics committee approval was obtained. Results: Guardrail violations occurred in 17.0% of clinician regimens vs. 4.9% of LLM regimens (paired RD −12.2%; matched OR 0.216, 95% CI 0.127-0.367; McNemar exact p = 1.60 × 10⁻¹⁰). Δ penalty had a median of 0 with 398/493 ties; among non-ties, improvements (Δ < 0) exceeded adverse shifts (79 vs. 16; sign-test p = 3.47 × 10⁻¹¹). Conclusions: In this offline, non-interventional paired evaluation, LLM-generated empiric regimens showed greater concordance with a pre-specified stewardship benchmarking framework than clinician empiric regimens for the same admissions. These findings should not be interpreted as evidence of clinical superiority, patient safety, or causal effectiveness, but rather as process-level benchmarking within a rule-based stewardship construct. As such, reproducible guardrail-based benchmarking may serve as an early pre-implementation step to identify alignment and potential failure modes before prospective, safety-governed evaluation.
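As a reproducibility check, the reported sign-test p-value can be recomputed from the discordant-pair counts alone (79 improvements vs. 16 adverse shifts, as stated in the abstract). A minimal standard-library sketch of the exact two-sided sign test, assuming the conventional doubled-smaller-tail definition of the two-sided p-value:

```python
from math import comb

def exact_sign_test(improvements: int, worsenings: int) -> float:
    """Exact two-sided sign test: under H0, each non-tied pair is
    equally likely to favor either arm (binomial with p = 0.5)."""
    n = improvements + worsenings
    k = min(improvements, worsenings)
    # Exact smaller binomial tail, doubled and capped at 1.
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Non-tied pairs reported in the abstract: 79 vs. 16
p = exact_sign_test(79, 16)
print(f"{p:.2e}")  # ~3.47e-11, consistent with the reported value
```

Note that this only reproduces the secondary sign-test result; the primary McNemar statistic additionally requires the discordant-pair counts for the guardrail-violation endpoint, which are not given explicitly in the abstract.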
