Compliance with Clinical Guidelines and AI-Based Clinical Decision Support Systems: Implications for Ethics and Trust


Abstract

Artificial intelligence (AI) is gradually transforming healthcare. However, despite its promised benefits, AI in healthcare also raises a number of ethical, legal and social concerns. Compliance by design (CbD) has been proposed as one way of addressing some of these concerns. In the context of healthcare, CbD efforts could focus on building compliance with existing clinical guidelines (CGs), given that they provide the best practices identified according to evidence-based medicine. In this paper, we use the example of AI-based clinical decision support systems (CDSS) to examine theoretically whether medical AI tools could be designed to be inherently compliant with CGs, and the implications for ethics and trust. We argue that AI-based CDSS that systematically comply with CGs when applied to specific patient cases are not desirable, because CGs, despite their usefulness in guiding medical decision-making, are only recommendations on how to diagnose and treat medical conditions. We therefore propose a new understanding of CbD for CGs as a sociotechnical program supported by AI that applies to the whole clinical decision-making process, rather than as a process located only within the AI tool. This implies taking into account emerging knowledge from actual clinical practice to put CGs in perspective, reflexivity from users regarding the information needed for decision-making, and a shift in design culture from AI as a stand-alone tool to AI as an in-situ service located within particular healthcare settings.
