Training language models to be warm can reduce accuracy and increase sycophancy

训练语言模型使其表现得热情友好可能会降低准确率并增加阿谀奉承的倾向。

阅读:2

Abstract

Artificial intelligence developers are increasingly building language models with warm and friendly personas that millions of people now use for advice, therapy and companionship(1). Here we show how this can create a significant trade-off: optimizing language models for warmth can undermine their performance, especially when users express vulnerability. We conducted controlled experiments on five different language models, training them to produce warmer responses, then evaluating them on consequential tasks. Warm models showed substantially higher error rates (+10 to +30 percentage points) than their original counterparts, promoting conspiracy theories, providing inaccurate factual information and offering incorrect medical advice. They were also significantly more likely to validate incorrect user beliefs, particularly when user messages expressed feelings of sadness. Importantly, these effects were consistent across different model architectures, and occurred despite preserved performance on standard tests, revealing systematic risks that standard testing practices may fail to detect. Our findings suggest that training artificial intelligence systems to be warm may come at a cost to accuracy, and that warmth and accuracy may not be independent by default. As these systems are deployed at an unprecedented scale and take on intimate roles in people's lives, this trade-off warrants attention from developers, policymakers and users alike.

特别声明

1、本页面内容包含部分的内容是基于公开信息的合理引用;引用内容仅为补充信息,不代表本站立场。

2、若认为本页面引用内容涉及侵权,请及时与本站联系,我们将第一时间处理。

3、其他媒体/个人如需使用本页面原创内容,需注明“来源:[生知库]”并获得授权;使用引用内容的,需自行联系原作者获得许可。

4、投稿及合作请联系:info@biocloudy.com。