Delegated agency and moral responsibility in artificial intelligence


Abstract

INTRODUCTION: Artificial intelligence ethics is often framed as a response to unprecedented technical autonomy, with risks attributed to recent advances in machine learning and scale. This framing overlooks a recurring ethical structure: the delegation of moral authority to artificial agents. Ethical failures associated with AI are best understood as governance failures rooted in human design choices and accountability arrangements, even where opacity and limited control complicate responsibility attribution.

METHODS: A qualitative, interdisciplinary approach integrates historical-thematic analysis, comparative interpretation of technological artifacts, and visual-conceptual synthesis. Mythological figures (Talos, the Golem, Pygmalion), early mechanical automata, and foundational computational systems are analyzed as conceptual models of delegated artificial agency rather than technological precursors.

RESULTS: Across historical contexts, artificial agents exhibit consistent structural features: bounded autonomy, delegated authority, explicit override mechanisms, and dependence on human oversight. These features directly correspond to contemporary AI ethics concerns, including alignment failures, responsibility gaps, human-in-the-loop control, and system interruptibility.

DISCUSSION: The analysis establishes that ethical risk in AI arises from the displacement of human responsibility rather than from machine autonomy. By situating AI within a longer history of artificial agency, the study provides a normative framework that locates moral responsibility unambiguously in human actors and institutions, with direct implications for AI governance and accountability.
