Abstract
Background/Objectives: Antimicrobial resistance (AMR) remains a major global health threat, strengthening the case for antimicrobial stewardship strategies that limit unnecessary broad-spectrum empiric therapy while preserving timely escalation when clinically warranted. Before any clinical deployment of large language model (LLM)-based antibiotic decision support can be considered, structured offline evaluation is needed to assess whether model outputs align with auditable stewardship constraints under real-world admission contexts. We therefore evaluated whether post hoc LLM-generated empiric antibiotic recommendations showed greater concordance with a prespecified stewardship benchmarking framework than clinician-initiated regimens in a retrospective shadow-mode setting.

Methods: Single-center retrospective paired evaluation at the Clinical Emergency Hospital of Bucharest (Internal Medicine, 2020–2024). The unit of analysis was the admission (N = 493), with paired 24 h empiric regimens (clinician-prescribed vs. post hoc LLM-recommended via the OpenAI API; LLM outputs were not visible to clinicians and had no influence on care). Local laboratory-derived epidemiology was precomputed from microbiology exports and provided as structured prompt context to approximate information parity with clinicians' implicit knowledge of local ecology. The primary (prespecified) endpoint was any contextual guardrail violation (unjustified carbapenem, antipseudomonal, or anti-MRSA use under prespecified structured severity/MDR-risk rules), analyzed with an exact McNemar test. The key secondary (prespecified) endpoint was the per-admission difference in contextual guardrail penalty (Δ = LLM − Clinician), analyzed with a sign test and a Wilcoxon signed-rank test (ties reported). Ethics committee approval was obtained.

Results: Guardrail violations occurred in 17.0% of clinician regimens vs. 4.9% of LLM regimens (paired risk difference −12.2%; matched OR 0.216, 95% CI 0.127–0.367; exact McNemar p = 1.60 × 10⁻¹⁰). The Δ penalty had a median of 0, with 398/493 ties; among non-ties, improvements (Δ < 0) exceeded adverse shifts (79 vs. 16; sign-test p = 3.47 × 10⁻¹¹).

Conclusions: In this offline, non-interventional paired evaluation, LLM-generated empiric regimens showed greater concordance with a prespecified stewardship benchmarking framework than clinician empiric regimens for the same admissions. These findings should not be interpreted as evidence of clinical superiority, patient safety, or causal effectiveness, but rather as process-level benchmarking within a rule-based stewardship construct. As such, reproducible guardrail-based benchmarking may serve as an early pre-implementation step to assess alignment and identify potential failure modes before prospective, safety-governed evaluation.
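Both exact paired tests reported in the Results (the exact McNemar test on discordant violation pairs and the sign test on non-tied Δ penalties) reduce to a two-sided binomial tail computation on the two discordant counts. A minimal stdlib sketch (the function name is illustrative; the 79 vs. 16 input split is the reported non-tie split for the penalty endpoint, while the discordant counts for the violation endpoint are not reported in the abstract):

```python
from math import comb

def exact_paired_test(b: int, c: int) -> float:
    """Exact two-sided sign test / McNemar test on discordant pairs.

    b, c: counts of the two kinds of discordant (non-tied) pairs.
    Under H0 each discordant pair falls either way with probability 0.5,
    so the p-value is twice the binomial tail at min(b, c), capped at 1.
    """
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2**n
    return min(1.0, 2 * tail)

# Reported non-tie split for the penalty endpoint:
# 79 improvements (Δ < 0) vs. 16 adverse shifts among 95 non-ties.
p = exact_paired_test(79, 16)
print(f"{p:.3g}")  # a very small p-value, on the order of the
                   # reported sign-test p = 3.47 × 10⁻¹¹
```

The matched odds ratio quoted in the Results is likewise a function of the discordant counts alone (the ratio of the two discordant-pair counts), which is why the paired design needs no independence assumption across admissions.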