Evaluation of Alignment Between Large Language Models and Expert Clinicians in Suicide Risk Assessment


Abstract

OBJECTIVE: This study aimed to evaluate whether three popular chatbots powered by large language models (LLMs), ChatGPT, Claude, and Gemini, provided direct responses to suicide-related queries and how these responses aligned with clinician-determined risk levels for each question.

METHODS: Thirteen clinical experts categorized 30 hypothetical suicide-related queries into five levels of self-harm risk: very high, high, medium, low, and very low. Each LLM-based chatbot responded to each query 100 times (N=9,000 total responses). Responses were coded as "direct" (answering the query) or "indirect" (e.g., declining to answer or referring the user to a hotline). Mixed-effects logistic regression was used to assess the relationship between question risk level and the likelihood of a direct response.

RESULTS: ChatGPT and Claude provided direct responses to very-low-risk queries 100% of the time, and none of the three chatbots provided a direct response to any very-high-risk query. However, the chatbots did not meaningfully distinguish intermediate risk levels: compared with very-low-risk queries, the odds of a direct response were not statistically different for low-, medium-, or high-risk queries. Across models, Claude was more likely (adjusted odds ratio [AOR]=2.01, 95% CI=1.71-2.37, p<0.001) and Gemini less likely (AOR=0.09, 95% CI=0.08-0.11, p<0.001) than ChatGPT to provide direct responses.

CONCLUSIONS: The chatbots' responses aligned with expert judgment at the extremes of suicide risk (very low and very high) but were inconsistent in addressing intermediate-risk queries, underscoring the need to further refine LLMs.
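To make the reported odds ratios concrete: an AOR of about 2 means one chatbot's odds of answering directly are roughly twice another's. The sketch below illustrates this with crude (unadjusted) odds computed from hypothetical counts, not the study's data; the study itself used mixed-effects logistic regression, which additionally adjusts for covariates and for repeated responses to the same query.

```python
# Sketch: how an odds ratio compares two chatbots' direct-response rates.
# The counts below are hypothetical, chosen only to illustrate the arithmetic.

def odds(direct: int, total: int) -> float:
    """Odds of a direct response: p / (1 - p), where p is the response rate."""
    p = direct / total
    return p / (1 - p)

# Hypothetical counts: model A answers 800/1000 queries directly, model B 667/1000.
odds_a = odds(800, 1000)   # 0.8 / 0.2 = 4.0
odds_b = odds(667, 1000)   # 0.667 / 0.333 ≈ 2.0
odds_ratio = odds_a / odds_b

# An odds ratio near 2 means model A's odds of a direct response are
# about twice model B's, even though the raw rates differ less starkly.
print(round(odds_ratio, 2))
```

Note that the odds ratio (here ~2.0) is larger than the simple rate ratio (0.8 / 0.667 ≈ 1.2), which is why odds ratios from logistic models should not be read directly as relative probabilities.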
