Decoloniality impact assessment for AI


Abstract

In the last decade, several organisations and national and international agencies have developed impact assessments (IAs) to mitigate the risks and impacts of AI systems and to promote responsible, just and trustworthy design, development and deployment. However, through a critical review of current AI IAs, we identify their failure to address fundamental questions regarding who defines problems, whose knowledge is valued, and who truly benefits from AI innovation, questions we collectively term the 'coloniality problem'. Developed primarily within Global North normative frameworks, these IAs risk perpetuating the very inequalities they aim to address by neglecting Global South perspectives and the extractive logic underpinning data practices. We therefore propose a novel approach: Decoloniality Impact Assessment (DIA), a critical, context-sensitive evaluative approach that assesses AI systems in relation to their inherent colonial legacies, global power asymmetries, and epistemic injustices. It moves beyond traditional ethical frameworks by interrogating how the AI innovation lifecycle and its practices reinforce structural inequalities, marginalise local knowledge systems, and perpetuate exploitative systems. The paper advocates an AI innovation lifecycle approach to DIA, recognising that coloniality manifests at every stage of AI development, from ideation to deployment. DIA is not a new impact assessment framework but an approach that can be integrated into existing frameworks such as the Council of Europe's HUDERIA framework. It is a call to reframe AI innovation so that technological futures are rooted in justice, pluriversality, and sovereignty.
