Abstract
PURPOSE/SIGNIFICANCE: To optimize smart healthcare services and advance the sustainable deployment of AI in medical triage, this study investigates differences in user trust between AI and human medical triage doctors and the underlying psychological mechanisms.

METHODS/PROCEDURES: Four online experiments were conducted using a between-group design to systematically manipulate the type of medical triage doctor (AI vs. human), degree of anthropomorphism (high vs. low), task sensitivity (high vs. low), and AI technology adoption level (high vs. low). Participants were recruited online to view screenshots of medical triage interactions and then complete measures assessing perceived psychological distance, anthropomorphism, task sensitivity, AI technology adoption level, and user trust. The PROCESS macro was used to test the mediation and moderation effects.

RESULTS/CONCLUSIONS: The study found that (1) participants placed greater trust in human than in AI medical triage doctors; (2) psychological distance played a partial mediating role; (3) a high degree of anthropomorphism effectively reduced the psychological distance between participants and AI medical triage doctors; (4) in low task sensitivity scenarios, psychological distance from high-anthropomorphism AI doctors did not differ significantly from that from human doctors, and both were perceived as closer than low-anthropomorphism AI doctors, whereas in high task sensitivity scenarios, psychological distance was closest for human doctors, followed by high-anthropomorphism AI doctors, and farthest for low-anthropomorphism AI doctors; and (5) a high AI technology adoption level narrowed the trust gap between AI and human medical triage doctors, although participants still trusted human doctors more.
These results underscore the importance of psychological distance in research on trust in AI healthcare and reveal that the effect of anthropomorphism depends on the task. The study also develops a comprehensive trust model that incorporates multiple moderating influences.