A Case for End-Constrained Ethical Artificial Intelligence


Abstract

One natural motivation for designing artificial intelligence (AI) with ethical capacities is to mitigate the risk that powerful AI systems will harm us. Against this idea, some authors have argued that we do not need ethical AI in order to prevent harm to humans; we simply need safe AI. In this paper, I consider an argument of this type and raise objections to it. In particular, I argue that implementing safety features in AI systems is a far more complicated task than these authors have acknowledged, and I maintain that merely safe AI could be ethically problematic in numerous ways. I then show that a certain kind of ethical AI, which I call end-autonomous ethical AI, would be especially dangerous, since their autonomous capacities would give rise to numerous risks. Finally, I make the case for a specific category of ethical AI, namely, end-constrained ethical AI. I describe these systems as possessing whatever capacities are necessary for satisfying ethical aims beyond safety while lacking end-autonomy. In short, the goal of this paper is to establish that end-constrained ethical AI occupy a desirable middle ground between merely safe AI and end-autonomous ethical AI: they allow us to secure more ethical goods than safety alone, and they are not as risky as end-autonomous ethical AI.
