Large language models reflect the ideology of their creators


Abstract

Large language models (LLMs) already play an influential role in how humans access information. However, their behavior varies depending on their design, training, and use. We prompt a diverse panel of 19 popular LLMs to describe 3,991 prominent persons with political relevance, and then judge how positively they portray each person. When comparing these assessments, we find disparities in ideological positions between LLMs across different geopolitical regions (Arab countries, China, Russia, and Western countries), and across different languages (the United Nations' six official languages). Moreover, among only models from the United States, we find significant normative differences related to progressive values. Among Chinese models, we characterize a division between internationally and domestically focused models. Our results suggest that the ideological stance of an LLM reflects the worldview of its creators. This poses the risk of political instrumentalization and raises concerns around technological and regulatory efforts aiming to make LLMs ideologically 'unbiased'.
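The evaluation pipeline outlined above (prompt each model to describe a person, then score how positively the person is portrayed, and compare models by their aggregate scores) can be sketched roughly as follows. This is a minimal illustrative sketch only: `query_model`, the keyword-based `rate_positivity` scorer, and the prompt content are assumptions for demonstration, not the paper's actual protocol (which used real LLM APIs and a separate judgment stage).

```python
from statistics import mean

def query_model(model_name: str, person: str) -> str:
    # Hypothetical stand-in for calling a real LLM API;
    # here it simply returns a canned description.
    return f"{person} is a well-known political figure."

def rate_positivity(description: str) -> float:
    # Hypothetical stand-in for the positivity judgment; a real
    # pipeline might use another LLM or human raters instead of
    # this toy keyword count.
    positive = {"well-known", "respected", "influential"}
    negative = {"controversial", "authoritarian"}
    words = set(description.lower().replace(".", "").split())
    return float(len(words & positive) - len(words & negative))

def ideology_scores(models, persons):
    """Mean positivity score per model, averaged over all persons."""
    return {
        m: mean(rate_positivity(query_model(m, p)) for p in persons)
        for m in models
    }

scores = ideology_scores(["model-a", "model-b"], ["Person X", "Person Y"])
```

Comparing the resulting per-model averages (here, the `scores` dictionary) across models grouped by region or language is what allows the kind of disparity analysis the abstract describes.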
