When Silence Signals Safety: Governance and Responsibility in AI-Enabled Prescription Verification


Abstract

Artificial intelligence (AI) is increasingly used to enhance prescription verification by screening medication orders, prioritizing pharmacist review, and, in some implementations, suppressing or deprioritizing alerts deemed low risk. While these systems may improve efficiency and the detection of prescribing risks, they also introduce challenges related to clinician reliance, accountability, and system oversight. This editorial argues that, in settings where algorithmic triage or alert suppression is relied upon, AI-enabled prescription verification may shift safety from an active clinical judgment to a passive inference based on algorithmic silence, redistributing rather than eliminating medication safety risk. As a result, safety work moves from preventing individual errors to maintaining vigilance through continuous monitoring and governance. Key issues discussed include automation bias, data drift and dataset shift, distributed clinical responsibility, and the limitations of traditional validation approaches such as one-time pre-implementation testing, reliance on static performance metrics, and periodic audits. Addressing these challenges requires governance frameworks that clarify accountability, uphold human judgment, and support ongoing evaluation of AI systems in clinical practice. By framing prescription verification as a socio-technical activity rather than a purely technical function, this editorial advances the discourse on the responsible integration of AI into medication safety workflows.
