Abstract
BACKGROUND: The prevalence of anxiety and depression is increasing globally, outpacing the capacity of traditional mental health services. Digital mental health interventions (DMHIs) provide a cost-effective alternative, but user engagement remains limited. Integrating artificial intelligence (AI)-powered conversational agents may enhance engagement and improve the user experience; however, with AI technology rapidly evolving, the acceptability of these solutions remains uncertain.

OBJECTIVE: This study aims to examine the acceptability, engagement, and usability of a conversational agent-led DMHI with human support for generalized anxiety by exploring patient expectations and experiences through a mixed methods approach.

METHODS: Participants (N=299) were offered a DMHI for up to 9 weeks and completed postintervention self-report measures of engagement (User Engagement Scale [UES]; n=190), usability (System Usability Scale [SUS]; n=203), and acceptability (Service User Technology Acceptability Questionnaire [SUTAQ]; n=203). To explore expectations and experiences with the program, a subsample of participants completed qualitative semistructured interviews before the intervention (n=21) and after the intervention (n=16), which were analyzed using inductive thematic analysis.

RESULTS: Participants rated the digital program as engaging (mean UES total score 3.7; 95% CI 3.5-3.8), rewarding (mean UES rewarding subscale 4.1; 95% CI 4.0-4.2), and easy to use (mean SUS total score 78.6; 95% CI 76.5-80.7). They were satisfied with the program and reported that it increased access to and enhanced their care (mean SUTAQ subscales 4.3-4.9; 95% CI 4.1-5.1). Insights from pre- and postintervention qualitative interviews highlighted 5 themes representing user needs important for acceptability: (1) accessible mental health support, in terms of availability and emotional approachability (Accessible Care); (2) practical and effective solutions leading to tangible improvements (Effective Solutions); (3) a personalized and tailored experience (Personal Experience); (4) guidance within a clear structure, while retaining control (Guided but in Control); and (5) a sense of support facilitated by human involvement (Feeling Supported). Overall, the DMHI met participant expectations, except for theme 3, as participants desired greater personalization and reported frustration when the conversational agent misunderstood them.

CONCLUSIONS: Incorporating factors critical to patient acceptability into DMHIs is essential to maximize their global impact on mental health care. This study provides both quantitative and qualitative evidence for the acceptability of a structured, conversational agent-driven digital program with human support for adults experiencing generalized anxiety. The findings highlight the importance of design, clinical, and implementation factors in enhancing engagement and reveal opportunities for ongoing optimization and innovation. Scalable models with stratified human support and the safe integration of generative AI have the potential to transform patient experience and increase the real-world impact of conversational agent-led DMHIs.

TRIAL REGISTRATION: ISRCTN Registry ISRCTN52546704; https://www.isrctn.com/ISRCTN52546704.