Toward a science of human-AI teaming for decision making: A complementarity framework


Abstract

As artificial intelligence (AI) becomes embedded in critical decisions involving health, safety, finance, and governance, the key challenge is no longer whether humans and AI will collaborate, but how to structure this collaboration to achieve true complementarity. Human-AI complementarity refers to the conditions under which human-AI teams outperform either humans alone or AI systems alone. This paper advances the science of human-AI teaming for decision making by integrating insights from cognitive science, AI, human factors, organizational behavior, and ethics. We propose a framework grounded in collective intelligence and anchored in the foundational cognitive processes of reasoning, memory, and attention to understand and engineer effective human-AI teams. We examine the sociotechnical factors that shape team effectiveness, including team composition, trust calibration, shared mental models, training, and task structure. We then outline design principles for achieving complementarity: defining goals and constraints, partitioning roles, orchestrating attention and interrogation, building knowledge infrastructures, and establishing continuous training and evaluation. We conclude with theoretical, practical, and policy implications, emphasizing alignment with human values, accountability, and equity. Together, these insights offer a roadmap for building human-AI teams that are not only high-performing and adaptive, but also transparent, trustworthy, and fundamentally human-centered.
