Artificial Intelligence Is Stereotypically Linked More with Socially Dominant Groups in Natural Language


Abstract

Despite the increasingly important role that artificial intelligence (AI) plays in society, its social representation remains underexplored. This study reveals that AI is asymmetrically stereotyped, being more closely associated with socially dominant groups. This conclusion is based on investigations of AI's associations along the warmth-competence dimensions of the stereotype content model, its connections with socially advantaged (e.g., men, young, rich, white) versus disadvantaged (e.g., women, old, poor, non-white) demographic groups, and its perceived impact on high- versus low-prestige occupations. Across four studies using language-based analyses with static word embeddings, BERT-based models, and GPT-4o, as well as validation experiments with human participants, it is found that i) AI is strongly associated with high competence but exhibits variability in the warmth dimension (Study 1); ii) AI is more closely linked to advantaged demographic groups (Study 2); iii) advantaged demographic groups are semantically closer to AI than their disadvantaged counterparts along the warmth-competence dimensions (Study 3); and iv) high-prestige occupations, rather than low-prestige ones, are more strongly associated with AI's benefits than with its threats (Study 4). Together, these findings indicate that public perceptions of AI are systematically biased toward socially dominant groups, potentially reinforcing existing social inequalities and raising concerns about an emerging "AI divide."
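The language-based analyses mentioned above rely on measuring semantic proximity between terms in an embedding space. A minimal sketch of such a relative-association score is shown below; note that the four-dimensional vectors and the word list are toy values invented purely for illustration, not the study's actual embeddings or stimuli.

```python
import math

# Toy embedding table (values invented for illustration only).
embeddings = {
    "AI":    [0.9, 0.8, 0.1, 0.2],
    "man":   [0.8, 0.7, 0.2, 0.1],
    "woman": [0.3, 0.2, 0.8, 0.7],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def relative_association(target, group_a, group_b):
    """Positive score: target is semantically closer to group_a than to group_b."""
    return (cosine(embeddings[target], embeddings[group_a])
            - cosine(embeddings[target], embeddings[group_b]))

score = relative_association("AI", "man", "woman")
print(f"relative association: {score:.3f}")
```

With real static embeddings (e.g., loaded from a pretrained model), the same difference-of-cosines logic can quantify whether "AI" sits closer to advantaged or disadvantaged group terms in the vector space.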
