When Is It Safe to Introduce an AI System Into Healthcare? A Practical Decision Algorithm for the Ethical Implementation of Black-Box AI in Medicine



Abstract

There is mounting global interest in the revolutionary potential of AI tools. However, their use in healthcare carries certain risks. Some argue that opaque ('black-box') AI systems in particular undermine patients' informed consent. While interpretable models offer an alternative, this approach may be impossible with generative AI and large language models (LLMs). We therefore propose that AI tools should be evaluated for clinical use based on their implementation risk rather than their interpretability, and we introduce a practical decision algorithm for the clinical implementation of black-box AI. Applied to the case of an LLM for surgical informed consent, the algorithm assesses a system's implementation risk along three dimensions: (1) technical robustness, (2) implementation feasibility and (3) an analysis of harms and benefits. Accordingly, the system is categorised as minimal-risk (standard use), moderate-risk (innovative use) or high-risk (experimental use). Recommendations for implementation are proportional to risk, with higher-risk categories requiring greater oversight. The algorithm also considers the system's cost-effectiveness and patients' informed consent.
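The triage described in the abstract — three evaluation criteria mapped to three risk categories — could be sketched as follows. This is purely an illustration: the abstract does not specify how the criteria combine into a category, so the pass/fail scoring and thresholds below are assumptions, not the paper's actual rubric.

```python
from enum import Enum


class Risk(Enum):
    """The three risk categories named in the abstract."""
    MINIMAL = "standard use"
    MODERATE = "innovative use"
    HIGH = "experimental use"


def classify(robustness_ok: bool, feasible: bool,
             benefits_outweigh_harms: bool) -> Risk:
    """Map the three evaluation criteria to a risk category.

    Assumption (not from the paper): each criterion is a simple
    pass/fail, and the category is determined by how many pass.
    """
    passed = sum([robustness_ok, feasible, benefits_outweigh_harms])
    if passed == 3:
        return Risk.MINIMAL
    if passed == 2:
        return Risk.MODERATE
    return Risk.HIGH
```

In a real evaluation each criterion would be a graded, evidence-based judgement rather than a boolean, and the paper's full algorithm additionally weighs cost-effectiveness and informed consent; the sketch only shows the shape of the risk-proportional categorisation.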
