Abstract
INTRODUCTION: As artificial intelligence (AI) technologies become increasingly integrated into organizational teamwork, managing communication breakdowns in human-AI collaboration has emerged as a significant managerial challenge. Although AI-empowered teams often achieve enhanced efficiency, misunderstandings, especially those caused by AI agents during information exchange, can undermine team trust and impair performance. The mechanisms underlying these effects remain insufficiently explored.

METHODS: Grounded in evolutionary psychology and trust theory, this study employed a 2 (team type: human-AI vs. human-human) × 2 (misunderstanding type: information omission vs. ambiguous expression) experimental design. A total of 126 participants provided valid responses; each was assigned to collaboratively complete a planning and writing task for a popular-science social media column with their respective teammates.

RESULTS: The findings indicate that information omissions caused by AI agents significantly reduce team trust, which in turn hinders communication efficiency and overall performance. In contrast, the negative impact of ambiguous expressions is moderated by the level of team trust: teams with higher trust demonstrate greater adaptability and resilience. Moderated mediation analyses further reveal that team type shapes the pathway from misunderstanding through trust to performance.

DISCUSSION: This research advances theoretical understanding of misunderstanding management in human-AI teams and offers practical insights for optimizing AI systems and fostering effective human-machine collaboration.