Abstract
In the last decade, several organisations and national and international agencies have developed impact assessments (IAs) to mitigate the risks and impacts of AI systems and to promote responsible, just and trustworthy design, development and deployment. However, through a critical review of current AI IAs, we identify their failure to address fundamental questions regarding who defines problems, whose knowledge is valued, and who truly benefits from AI innovation — questions we collectively term the 'coloniality problem'. Developed primarily within Global North normative frameworks, these IAs risk perpetuating the very inequalities they aim to address by neglecting Global South perspectives and the extractive logic underpinning data practices. We therefore propose a novel approach, Decoloniality Impact Assessment (DIA): a critical, context-sensitive evaluative approach that assesses AI systems in relation to their inherent colonial legacies, global power asymmetries, and epistemic injustices. It moves beyond traditional ethical frameworks by interrogating how the AI innovation lifecycle and its practices reinforce structural inequalities, marginalise local knowledge systems, and perpetuate exploitative practices. The paper advocates an AI innovation lifecycle approach to DIA, recognising that coloniality manifests at every stage of AI development, from ideation to deployment. DIA is not a new impact assessment framework but an approach that can be integrated into existing frameworks such as the Council of Europe's HUDERIA framework. It is a call to reframe AI innovation so that technological futures are rooted in justice, pluriversality, and sovereignty.