Abstract
BACKGROUND: Large language models (LLMs) have gained attention for their ability to exhibit human-like clinical reasoning on mock clinical cases. Because of privacy concerns, however, few studies have evaluated their use in real-world healthcare settings. We aimed to assess the accuracy of LLMs in auditing blood culture appropriateness using real charts.

METHODS: Stanford University deployed secure LLMs with direct access to electronic medical records. Using these, we developed two artificial intelligence (AI) agents, task-specific models designed to audit blood culture order appropriateness against previously published criteria. We applied the agents to a random sample of 105 blood culture orders previously audited by an infectious diseases provider between May and December 2024. After excluding repeat orders within 48 hours, 67 unique cases remained (31 appropriate, 36 inappropriate). Each case included all assessment and plan notes from admission to blood culture collection (range: 1–500 notes). The initial reviewer agent (gpt-4o-mini; OpenAI) scanned the notes for any mention of appropriateness or non-appropriateness criteria. A second, more capable double-checker agent (o1-mini; OpenAI) then reviewed and, if necessary, corrected the initial classification (see the illustrative pipeline sketch below).

RESULTS: Overall performance of the AI agents was modest: balanced accuracy 0.568, sensitivity 0.774, and specificity 0.361 (see the worked check below). The agents frequently over-flagged blood culture orders as appropriate, recommending blood cultures across a broad range of cases. This likely reflects sycophancy, a known LLM behavior in which the model aligns with the reasoning presented in the clinical notes, for example agreeing with the care team's suspicion of sepsis even when objective criteria were not met. Notably, the "severe sepsis/septic shock" criterion was the most common justification the AI agents gave for classifying orders as appropriate.

CONCLUSION: The AI agents demonstrated limited performance in adjudicating blood culture appropriateness. Their decisions were strongly influenced by sycophantic bias and by the presence of the word "sepsis" in the notes. In medical classification tasks, LLMs may be best suited to initial screening rather than to making clinical recommendations.

DISCLOSURES: All Authors: No reported disclosures
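For readers curious how a reviewer/double-checker chain like the one described in METHODS might be wired together, here is a minimal sketch using the OpenAI Python SDK. The prompt wording, the CRITERIA placeholder, and the function names are illustrative assumptions; the study's actual prompts, criteria text, and EMR integration are not described in the abstract.

```python
# Minimal sketch of a two-agent audit pipeline (assumed structure, not the
# study's actual implementation).
from openai import OpenAI

client = OpenAI()

CRITERIA = "..."  # placeholder for the previously published appropriateness criteria

def initial_review(notes: str) -> str:
    """First pass: gpt-4o-mini scans the notes for mentions of the criteria."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "Audit blood culture order appropriateness against these criteria:\n"
                f"{CRITERIA}\n"
                "Answer APPROPRIATE or NOT_APPROPRIATE and cite the criterion."
            )},
            {"role": "user", "content": notes},
        ],
    )
    return resp.choices[0].message.content

def double_check(notes: str, first_pass: str) -> str:
    """Second pass: o1-mini reviews and, if necessary, corrects the first label.
    Instructions are inlined in the user message because o1-series models
    do not accept system-role messages."""
    resp = client.chat.completions.create(
        model="o1-mini",
        messages=[{
            "role": "user",
            "content": (
                f"Criteria:\n{CRITERIA}\n\nNotes:\n{notes}\n\n"
                f"Initial classification:\n{first_pass}\n\n"
                "Verify this against the criteria and return a final label: "
                "APPROPRIATE or NOT_APPROPRIATE."
            ),
        }],
    )
    return resp.choices[0].message.content
```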
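As a quick arithmetic check of the RESULTS, balanced accuracy is the mean of sensitivity and specificity. Assuming "appropriate" is the positive class, the counts below (24 true positives, 13 true negatives) are back-derived from the reported rates and the 31/36 class split; they are an inference, not figures reported in the abstract.

```python
# Back-derived confusion matrix, assuming "appropriate" = positive class.
tp = 24          # appropriate orders labeled appropriate: 24/31 ~ 0.774
fn = 31 - tp     # appropriate orders missed
tn = 13          # inappropriate orders labeled inappropriate: 13/36 ~ 0.361
fp = 36 - tn     # inappropriate orders over-flagged as appropriate

sensitivity = tp / (tp + fn)                          # 0.774
specificity = tn / (tn + fp)                          # 0.361
balanced_accuracy = (sensitivity + specificity) / 2   # 0.568

print(round(sensitivity, 3), round(specificity, 3), round(balanced_accuracy, 3))
# -> 0.774 0.361 0.568, matching the reported metrics
```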