Tubular Stress Markers and Future Risk of Sepsis-Associated Acute Kidney Injury

Abstract

BACKGROUND: Large language models (LLMs) have gained attention for their ability to exhibit human-like clinical reasoning on mock clinical cases. However, because of privacy concerns, few studies have evaluated their use in real-world healthcare settings. We aimed to assess the accuracy of LLMs in auditing blood culture appropriateness using real charts.

METHODS: Stanford University deployed secure LLMs with direct access to electronic medical records. Using these, we developed two artificial intelligence (AI) agents, task-specific models designed to audit blood culture order appropriateness based on previously published criteria. We applied the agents to a random sample of 105 blood culture orders previously audited by an infectious diseases provider between May and December 2024. After excluding repeat orders within 48 hours, 67 unique cases remained (31 appropriate, 36 not appropriate). Each case included all assessment and plan notes from admission to blood culture collection (range: 1–500 notes). The initial reviewer agent (gpt-4o-mini; OpenAI) scanned the notes for any mention of appropriateness or non-appropriateness criteria. A second, more powerful double-checker agent (o1-mini; OpenAI) then reviewed and, if necessary, corrected the initial classification.

RESULTS: Overall performance of the AI agents was modest, with a balanced accuracy of 0.568, a sensitivity of 0.774, and a specificity of 0.361. The agents frequently over-flagged blood culture orders as appropriate, showing a tendency to recommend blood cultures across a broad range of cases. This likely reflects a known LLM behavior, sycophancy, in which the model aligns with the reasoning presented in the clinical notes (for example, agreeing with the care team's suspicion of sepsis) even when objective criteria are not met. Notably, the "severe sepsis/septic shock" criterion was the most common justification the AI agents gave for classifying orders as appropriate.

CONCLUSION: The AI agents demonstrated limited performance in adjudicating blood culture appropriateness. Their decisions were largely influenced by sycophantic bias and by the presence of the word "sepsis" in the notes. Their utility in medical classification tasks may be best suited to initial screening rather than clinical recommendations.

DISCLOSURES: All Authors: No reported disclosures.
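The reported metrics are internally consistent: with 31 appropriate and 36 not-appropriate orders, a sensitivity of 0.774 and a specificity of 0.361 imply roughly 24 true positives and 13 true negatives. These counts are reconstructed here for illustration, not stated in the abstract; a short Python check:

```python
# Reconstructed confusion-matrix counts (assumption, derived from the
# reported sensitivity/specificity and the 31/36 case split):
tp, fn = 24, 7    # of 31 orders judged appropriate by the human auditor
tn, fp = 13, 23   # of 36 orders judged not appropriate

sensitivity = tp / (tp + fn)                     # 24/31 ≈ 0.774
specificity = tn / (tn + fp)                     # 13/36 ≈ 0.361
balanced_accuracy = (sensitivity + specificity) / 2  # ≈ 0.568

print(round(sensitivity, 3),
      round(specificity, 3),
      round(balanced_accuracy, 3))
# → 0.774 0.361 0.568
```

Balanced accuracy, the mean of sensitivity and specificity, is the appropriate summary here because the two classes (31 vs. 36) are nearly but not exactly balanced.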
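As a rough illustration of the two-agent design described in the methods, here is a minimal Python sketch. The model calls are replaced by rule-based stand-ins; all function names, keywords, and correction rules are hypothetical and do not reflect the study's actual prompts or published criteria.

```python
def reviewer_agent(notes):
    """First pass (the gpt-4o-mini role): scan the notes for any
    mention of appropriateness criteria and propose a label."""
    text = " ".join(notes).lower()
    # Toy stand-in for the appropriateness criteria keywords.
    if "severe sepsis" in text or "septic shock" in text:
        return "appropriate"
    return "not appropriate"

def double_checker_agent(notes, initial_label):
    """Second pass (the o1-mini role): review the initial
    classification and correct it if necessary."""
    text = " ".join(notes).lower()
    # Toy correction rule: retract "appropriate" when the notes
    # explicitly state the criteria were not met.
    if initial_label == "appropriate" and "criteria not met" in text:
        return "not appropriate"
    return initial_label

def audit_order(notes):
    """Chain the two agents, as in the described pipeline."""
    return double_checker_agent(notes, reviewer_agent(notes))

print(audit_order(["Pt in septic shock, cultures sent."]))
# → appropriate
```

The abstract's sycophancy finding maps onto this structure: a keyword-driven first pass inherits whatever suspicion the note authors expressed, so the second agent's corrections matter most in exactly the cases where the notes themselves are over-confident.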
