Abstract
OBJECTIVE: To examine user perceptions of human advisors versus chatbots in mental health apps and to assess the impact of algorithm literacy interventions. METHODS: In an online experiment, participants engaged in simulated chats about digital stress. A within-subjects factor compared the effects of human advisors and chatbots on perceived likeability, credibility, and social competence. A between-subjects factor tested information interventions (positive vs. negative vs. no evaluation of algorithms) aimed at increasing algorithm literacy. RESULTS: Participants rated human advisors more favorably than chatbots on all measured dimensions (p < .001). The information interventions did not change participants' attitudes toward chatbots. CONCLUSION: Users consistently preferred human advisors over chatbots for mental health advice, and brief algorithm literacy interventions had no significant effect on these preferences. This suggests that simply increasing users' understanding of chatbots is insufficient to enhance chatbot acceptance in mental health settings, underscoring the need for more effective approaches to improving adoption.