Abstract
In a number of policy, institutional, activist, and advocacy contexts, attributing bias to an algorithm does not merely describe the algorithm; it also imposes a particular, normatively laden conception of bias on others. Given this normative content, such bias attributions make moral demands on others: to rectify the algorithm, to compensate the victims of its bias, and/or to refrain from deploying the algorithm unselectively. Moral demands, especially in the contexts mentioned above, are subject to a public justification requirement. Yet the dominant accounts of bias in the literature presuppose both some version of egalitarianism about justice and the claim that any action causally contributing to an unjust situation is itself wrong. Since these presuppositions are subject to reasonable disagreement, bias attributions resting on them violate the public justification requirement and are therefore wrong. In response, we develop a publicly justifiable conception of algorithmic bias.