Abstract
Should developers be held responsible for the predictions of their neural networks, and if not, does that introduce a responsibility gap? The claim that neural networks introduce a responsibility gap has seen significant pushback, with philosophers arguing that the gap can be bridged or that it never existed in the first place. We show that the responsibility gap turns on whether we can distinguish between foreseeable and unforeseeable neural network predictions. Empirical facts about neural networks tell us that we cannot, which seems to force developers to assume either full responsibility or none at all, introducing a responsibility gap, unless, of course, the same empirical facts hold true of humans, in which case there is no gap and the trouble lies instead with the classical notion of responsibility. We revisit and revise Mele's Zygote argument, as well as the famous Palsgraf case, and argue that what complicates responsibility assignment for neural networks in fact also complicates responsibility assignment for humans, and that humans seem to confront us with the same all-or-nothing dilemma. Thus, we agree that there is no technology-induced responsibility gap (there was no gap in the first place), though for slightly different reasons than our predecessors.