Empirically derived evaluation requirements for responsible deployments of AI in safety-critical settings


Abstract

Processes to assure the safe, effective, and responsible deployment of artificial intelligence (AI) in safety-critical settings are urgently needed. Here we present a procedure to empirically evaluate the impacts of AI augmentation as a basis for responsible deployment. We evaluated three augmentative AI technologies that nurses used to recognize imminent patient emergencies, including combinations of AI recommendations and explanations. The evaluation involved 450 nursing students and 12 licensed nurses assessing 10 historical patient cases. With each technology, nurses' performance improved when the AI algorithm was most accurate and degraded when it was most misleading. Our findings caution that AI capabilities alone do not guarantee a safe and effective joint human-AI system. We propose two minimum requirements for evaluating AI in safety-critical settings: (1) empirically measure the performance of people and AI together, and (2) examine a range of challenging cases that elicit strong, mediocre, and poor AI performance.
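The two proposed requirements imply an evaluation harness that records per-case outcomes with and without AI assistance and stratifies them by the quality of the AI's output. The sketch below is illustrative only: the record fields, strata labels, and data are hypothetical assumptions for demonstration, not the study's actual instrument or results.

```python
from collections import defaultdict

def evaluate_joint_performance(records):
    """Group case-level outcomes by AI-output quality stratum and
    compare human accuracy with vs. without AI assistance.

    Each record is a dict with (hypothetical) fields:
      ai_quality            -- e.g. "strong", "mediocre", "poor"
      human_alone_correct   -- 1 if the unassisted human was correct
      human_with_ai_correct -- 1 if the AI-assisted human was correct
    """
    by_stratum = defaultdict(lambda: {"alone": [], "with_ai": []})
    for r in records:
        by_stratum[r["ai_quality"]]["alone"].append(r["human_alone_correct"])
        by_stratum[r["ai_quality"]]["with_ai"].append(r["human_with_ai_correct"])
    # Accuracy per stratum reveals whether assistance helps on strong-AI
    # cases while degrading performance on misleading (poor-AI) cases.
    return {
        stratum: {
            "alone_acc": sum(v["alone"]) / len(v["alone"]),
            "with_ai_acc": sum(v["with_ai"]) / len(v["with_ai"]),
        }
        for stratum, v in by_stratum.items()
    }

# Hypothetical outcomes spanning strong, mediocre, and poor AI cases.
records = [
    {"ai_quality": "strong",   "human_alone_correct": 1, "human_with_ai_correct": 1},
    {"ai_quality": "strong",   "human_alone_correct": 0, "human_with_ai_correct": 1},
    {"ai_quality": "mediocre", "human_alone_correct": 1, "human_with_ai_correct": 1},
    {"ai_quality": "mediocre", "human_alone_correct": 0, "human_with_ai_correct": 0},
    {"ai_quality": "poor",     "human_alone_correct": 1, "human_with_ai_correct": 0},
    {"ai_quality": "poor",     "human_alone_correct": 1, "human_with_ai_correct": 1},
]
summary = evaluate_joint_performance(records)
```

Reporting accuracy separately for each AI-quality stratum, rather than a single pooled number, is what surfaces the automation-bias pattern the abstract describes: gains on cases where the AI is correct can mask losses on cases where it is misleading.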
