Abstract
As agents such as AI systems and robots increasingly support human decision-making, questions of accountability when these agents fail have become critical. Prior research has examined responsibility attribution mainly in terms of system autonomy, transparency, or anthropomorphism, but little is known about how cognitive framing (prior knowledge of the agent) and contextual framing (the perceived importance of a task) jointly shape these judgments. This study addresses that gap through a three-factor mixed-design experiment with 588 participants, who rated the responsibility of the user, the agent, and the agent's developer or provider after observing failed agent-assisted interactions. The results showed that prior knowledge of the agent shifted responsibility away from the user and toward the agent and its developer. Moreover, when the topic of the interaction was perceived as highly important, the responsibility attributed to the developer or provider increased substantially. These findings indicate that responsibility attribution in human-agent interaction is dynamic rather than static, modulated by both user expectations and situational seriousness. Beyond their theoretical contribution, the results carry practical implications for system design, user education, and legal policy, offering guidance on how to reduce accountability gaps in the deployment of socially embedded agents.