Public perception of accuracy-fairness trade-offs in algorithmic decisions in the United States


Abstract

The naive approach to preventing discrimination in algorithmic decision-making is to exclude protected attributes from the model's inputs. This approach, known as "equal treatment," aims to treat all individuals identically regardless of their demographic characteristics. However, this practice can still produce unequal impacts across groups. Alternative notions of fairness have recently been proposed to reduce such unequal impact, but they may require sacrificing predictive accuracy. The present research investigates public attitudes toward these trade-offs in the United States. When are individuals more likely to support equal treatment algorithms (ETAs), characterized by higher predictive accuracy, and when do they prefer equal impact algorithms (EIAs), which reduce performance gaps between groups? A randomized conjoint experiment and a follow-up choice experiment revealed that support for EIAs decreased sharply as their accuracy gap grew, although impact parity was prioritized more when ETAs produced large outcome discrepancies. Preferences also polarized along partisan lines: Democrats favored impact parity over accuracy maximization, while Republicans displayed the reverse preference. Gender and social justice orientations likewise significantly predicted EIA support. Overall, the findings demonstrate multidimensional drivers of attitudes toward algorithmic fairness, underscoring divisions around equality versus equity principles. Achieving standards for fair AI requires addressing conflicting human values through good governance.
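To make the two notions concrete, here is a minimal toy sketch contrasting an equal-treatment rule (a single decision threshold that ignores group membership) with one simple realization of an equal-impact rule (group-specific thresholds that narrow the gap in selection rates). The data, thresholds, and the selection-rate measure of "impact" are illustrative assumptions, not the study's materials or a definition the paper commits to.

```python
# Hypothetical records: (risk score, group, true label). Group B's scores
# are systematically lower, so a single threshold selects almost no one in B.
data = [
    (0.9, "A", 1), (0.8, "A", 1), (0.7, "A", 1), (0.4, "A", 0),
    (0.6, "B", 0), (0.5, "B", 1), (0.3, "B", 0), (0.2, "B", 0),
]

def evaluate(decide):
    """Return (accuracy, selection-rate gap between groups) for a decision rule."""
    correct = 0
    selected = {"A": 0, "B": 0}
    count = {"A": 0, "B": 0}
    for score, group, label in data:
        pred = decide(score, group)
        correct += int(pred == label)
        selected[group] += pred
        count[group] += 1
    accuracy = correct / len(data)
    gap = abs(selected["A"] / count["A"] - selected["B"] / count["B"])
    return accuracy, gap

# Equal treatment: the same threshold for everyone; group is ignored.
eta_acc, eta_gap = evaluate(lambda s, g: int(s >= 0.65))

# Equal impact (one possible version): a lower threshold for group B,
# chosen here so that both groups are selected at the same rate.
eia_acc, eia_gap = evaluate(lambda s, g: int(s >= (0.65 if g == "A" else 0.25)))

print(f"ETA: accuracy={eta_acc:.3f}, selection-rate gap={eta_gap:.2f}")
print(f"EIA: accuracy={eia_acc:.3f}, selection-rate gap={eia_gap:.2f}")
```

On this toy data the equal-treatment rule is more accurate but selects group A far more often, while the equal-impact rule closes the selection-rate gap at a cost in accuracy — the trade-off whose public acceptability the experiments probe.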
