Mitigated deployment strategy for ethical AI in clinical settings


Abstract

Clinical diagnostic tools can disadvantage subgroups due to poor model generalisability, which can be caused by unrepresentative training data. Practical deployment solutions to mitigate harm to subgroups from models with differential performance have yet to be established. This paper builds on existing work that considers a selective deployment approach, in which poorly performing subgroups are excluded from deployments. As an alternative, the proposed 'mitigated deployment' strategy requires safety nets to be built into clinical workflows to safeguard under-represented groups in a universal deployment. This approach relies on human-artificial intelligence collaboration and postmarket evaluation to continually improve model performance across subgroups using real-world data. Using a real-world case study, the benefits and limitations of mitigated deployment are explored. This adds to the tools available to healthcare organisations when considering how to safely deploy models with differential performance across subgroups.
