Abstract
Automated vehicles controlled by artificial intelligence are becoming capable of making moral decisions independently. This study investigated how participants' judgments of a moral decision-maker's permissibility differed between viewing the scenario (pre-test) and witnessing the outcome of the moral decision (post-test). It also examined how permissibility, ten typical moral emotions, and perceived moral agency varied when AI or a human driver made deontological or utilitarian decisions in a pedestrian-sacrificing dilemma (Experiment 1, N = 254) and a driver-sacrificing dilemma (Experiment 2, N = 269), viewed from a third-person perspective. In addition, binary logistic regression was used to examine whether these factors predicted a non-decrease in permissibility ratings. In both experiments, participants preferred to delegate decisions to human drivers rather than to AI, and they generally favored utilitarianism over deontology; ratings of moral emotions and perceived moral agency corroborated these preferences. Experiment 2 elicited greater variation in permissibility, moral emotions, and perceived moral agency than Experiment 1. In Experiment 1, deontology and gratitude positively predicted a non-decrease in permissibility ratings, whereas contempt had a negative influence; in Experiment 2, the human driver and disgust were significant negative predictors, whereas perceived moral agency had a positive influence. These findings deepen understanding of the dynamic processes of moral decision-making in autonomous driving, clarify people's attitudes toward moral machines and the reasons underlying those attitudes, and provide a reference for developing more sophisticated moral machines.