We need better images of AI and better conversations about AI



Abstract

In this article, we critique the ways in which people involved in the development and application of AI systems often visualize and talk about those systems: typically as shiny humanoid robots or as free-floating electronic brains. Such images convey misleading messages, suggesting that AI works independently of people and can reason in ways superior to people. Instead, we propose visualizing AI systems as parts of larger sociotechnical systems; here, we can learn, for example, from cybernetics. Similarly, we propose that the people involved in the design and deployment of an algorithm extend their conversations beyond the four boxes of the Error Matrix, for example, to critically discuss false positives and false negatives. We present two thought experiments, each with one practical example. We propose to understand, visualize, and talk about AI systems in relation to a larger, complex reality; this is the requirement of requisite variety. We also propose enabling people from diverse disciplines to collaborate around boundary objects, for example: a drawing of an AI system in its sociotechnical context, or an 'extended' Error Matrix. Such interventions can promote meaningful human control, transparency, and fairness in the design and deployment of AI systems.
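The "four boxes" of the Error Matrix referred to above are the familiar confusion matrix of binary classification: true positives, false positives, false negatives, and true negatives. As a minimal illustrative sketch (the classifier outputs and labels below are entirely hypothetical), the four counts can be computed as follows:

```python
def error_matrix(y_true, y_pred):
    """Count the four boxes of a binary Error Matrix (labels are 0 or 1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives
    return {"TP": tp, "FP": fp, "FN": fn, "TN": tn}

# Hypothetical ground-truth labels and predictions from some classifier:
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
print(error_matrix(y_true, y_pred))  # {'TP': 3, 'FP': 1, 'FN': 1, 'TN': 3}
```

The article's point is precisely that such counts are only a starting point: an 'extended' conversation would ask what a false positive or false negative means for the people affected by the algorithm's decisions, which the four numbers alone cannot express.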
