Abstract
INTRODUCTION: Artificial intelligence ethics is often framed as a response to unprecedented technical autonomy, with risks attributed to recent advances in machine learning and scale. This framing overlooks a recurring ethical structure: the delegation of moral authority to artificial agents. Ethical failures associated with AI are best understood as governance failures rooted in human design choices and accountability arrangements, even where opacity and limited control complicate responsibility attribution.

METHODS: A qualitative, interdisciplinary approach integrates historical-thematic analysis, comparative interpretation of technological artifacts, and visual-conceptual synthesis. Mythological figures (Talos, the Golem, Pygmalion), early mechanical automata, and foundational computational systems are analyzed as conceptual models of delegated artificial agency rather than as technological precursors.

RESULTS: Across historical contexts, artificial agents exhibit consistent structural features: bounded autonomy, delegated authority, explicit override mechanisms, and dependence on human oversight. These features correspond directly to contemporary AI ethics concerns, including alignment failures, responsibility gaps, human-in-the-loop control, and system interruptibility.

DISCUSSION: The analysis establishes that ethical risk in AI arises from the displacement of human responsibility rather than from machine autonomy. By situating AI within a longer history of artificial agency, the study provides a normative framework that locates moral responsibility unambiguously in human actors and institutions, with direct implications for AI governance and accountability.