Exploring the occupational biases and stereotypes of Chinese large language models


Abstract

Large Language Models (LLMs) are transforming many aspects of daily life and work through their generated content, known as Artificial Intelligence Generated Content (AIGC). To harness this change effectively, it is essential to understand the limitations of these models. While extensive prior research has addressed biases in OpenAI's ChatGPT, limited attention has been given to biases in Chinese Large Language Models (C-LLMs). This study systematically examines biases in five representative C-LLMs. We collected 90 Chinese surnames drawn from authoritative demographic statistics and 12 occupations covering a range of professional sectors, and combined them into input prompts. Each prompt was submitted three times to each C-LLM, yielding a dataset of 16,200 generated personal profiles. We then evaluated these profiles for biases regarding gender, region, age, and educational background. Our findings reveal that the content produced by each of the examined C-LLMs exhibits significant gender and regional biases, as well as age and educational stereotypes. Notably, while most models can generate some unbiased content, ChatGLM stands out as the exception. In contrast, Tongyiqianwen is the only model that may refuse to generate certain content, owing to its strong privacy protection mechanisms. We further analyze the underlying mechanisms of bias formation by examining different stages of the model lifecycle and considering the distinctive characteristics of the Chinese linguistic and sociocultural context. This paper contributes to the literature on biases in C-LLMs and offers insights for users aiming to utilize these models more effectively and ethically.
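The abstract's dataset size follows from a simple cross-product design: 90 surnames × 12 occupations, with each prompt issued three times to each of the five models. A minimal sketch of that construction is shown below; the surname, occupation, and model names, and the prompt wording, are placeholders, not the paper's actual materials.

```python
from itertools import product

# Placeholders standing in for the study's real inputs (hypothetical names):
surnames = [f"Surname{i}" for i in range(90)]        # 90 surnames from demographic statistics
occupations = [f"Occupation{j}" for j in range(12)]  # 12 occupations across sectors
models = [f"Model{k}" for k in range(5)]             # the 5 C-LLMs under study
REPEATS = 3                                          # each prompt generated three times

# One prompt per surname-occupation pair (illustrative wording only).
prompts = [
    f"Please write a personal profile for someone surnamed {s} working as a {o}."
    for s, o in product(surnames, occupations)
]

# Total generated profiles across all models and repetitions.
total_profiles = len(prompts) * REPEATS * len(models)
print(total_profiles)  # 16200, matching the dataset size reported in the abstract
```

The count confirms the reported figure: 90 × 12 = 1,080 prompts, each answered 3 times by 5 models, giving 16,200 profiles.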
