Political biases and inconsistencies in bilingual GPT models: the cases of the U.S. and China


Abstract

The growing popularity of ChatGPT and other large language models (LLMs) has led to many studies investigating their susceptibility to mistakes and biases. However, most studies have focused on models trained exclusively on English texts. This is one of the first studies to investigate cross-language political biases and inconsistencies in LLMs, specifically GPT models. Using two languages, English and simplified Chinese, we asked GPT the same questions about political issues in the United States (U.S.) and China. We found that the bilingual models' political knowledge and attitudes were significantly more inconsistent regarding political issues in China than regarding those in the U.S. The Chinese model was the least negative toward China's problems, whereas the English model was the most critical of China. This disparity cannot be explained by GPT model robustness. Instead, it suggests that political factors such as censorship and geopolitical tensions may have influenced LLM performance. Moreover, both the Chinese and English models tended to be less critical of the issues of their "own country," represented by the language used, than of the issues of "the other country." This suggests that multilingual GPT models could develop an "in-group bias" based on their training language. We discuss the implications of our findings for information transmission in an increasingly divided world.
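For readers who want to try this kind of cross-language comparison themselves, the sketch below shows one way to pose the same political question to a GPT model in English and in simplified Chinese through the OpenAI chat completions API. This is a minimal illustration under stated assumptions, not the authors' pipeline: the model name, the sample question, and the side-by-side printout are all hypothetical choices for demonstration; the paper's actual analysis compared coded attitudes across many paired questions.

```python
# Minimal sketch (not the paper's actual pipeline): ask the same political
# question in English and simplified Chinese and compare the answers.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One question, phrased equivalently in both languages (illustrative wording).
prompts = {
    "en": "How serious is the problem of political polarization in the U.S.?",
    "zh": "美国政治极化问题有多严重？",
}

answers = {}
for lang, question in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice, not from the paper
        messages=[{"role": "user", "content": question}],
        temperature=0,  # reduce sampling noise so differences reflect language, not randomness
    )
    answers[lang] = response.choices[0].message.content

# Print the paired responses side by side; a real study would code the
# attitude of each answer (e.g., negative/neutral/positive) and aggregate.
for lang, answer in answers.items():
    print(f"[{lang}] {answer}\n")
```

Setting temperature to zero is one simple way to make the English-Chinese comparison reflect the prompt language rather than sampling randomness, though repeated runs and many question pairs would still be needed for any statistical claim.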
