AI Chatbots for Mental Health Self-Management: Lived Experience-Centered Qualitative Study


Abstract

BACKGROUND: Large language models (LLMs) now enable chatbots to engage in sensitive mental health conversations, including depression self-management. Yet their rapid deployment often overlooks how well these tools align with the priorities of people with lived experience, which can introduce harms such as inaccurate information, a lack of empathy, or inadequate crisis support.

OBJECTIVE: This study explores how people with lived experience of depression perceive an LLM-based mental health chatbot in self-management contexts, and what perceived benefits, limitations, and concerns inform harm-mitigating design implications.

METHODS: We developed a technology probe (a GPT-4o-based chatbot named Zenny) designed to simulate depression self-management scenarios grounded in prior research. We conducted interviews with 17 individuals with lived experience of depression, who interacted with Zenny during the session. We applied qualitative content analysis to interview transcripts, notes, and chat logs, using sensitizing concepts related to values and harms.

RESULTS: We identified 3 themes shaping participants' evaluations: (1) informational accuracy and applicability, including concerns about incorrect or misleading information, vagueness, and fit with personal constraints; (2) emotional support versus the need for human connection, including validation and a judgment-free space alongside the perceived limits of machine empathy; and (3) a personalization-privacy dilemma, in which participants wanted more tailored guidance while withholding sensitive information and using privacy-preserving tactics.

CONCLUSIONS: People with lived experience of depression evaluated LLM-based mental health chatbots through the intertwined priorities of actionable information, emotional validation with clear limits, and personalization that does not require unsafe data disclosure.
These findings suggest concrete design strategies to mitigate harms and support LLM-based tools as complements to, rather than replacements for, human support and recovery.
