Level-1-visual perspective taking for human and robot avatars


Abstract

Research on level 1 visual perspective taking (L1-VPT) has debated whether L1-VPT is an implicit, socially rooted process or a non-social one. Using online versions of the Dot Perspective Task by Samson et al. (Journal of Experimental Psychology: Human Perception and Performance, 36(5), 1255-1266, 2010), we approached this question by comparing L1-VPT for robot versus human avatars. In line with the assumption that visual perspective taking reflects mentalizing, we predicted that perspective taking, leading to altercentric intrusions, should occur more strongly for human avatars than for robot avatars. Both experiments used a within-participant design: 2 (avatar: human vs. robot) × 2 (avatar perspective: consistent vs. inconsistent) × 2 (task: avatar perspective vs. self-perspective). The human avatar was male in Experiment 1 (n = 120) and female in Experiment 2 (n = 113). Analyses of reaction times and error rates showed significant, medium-to-large egocentric intrusions and significant, small-to-medium altercentric intrusions for both avatar types, suggesting interference from the irrelevant perspective. Contrary to the prediction, altercentric intrusions were not significantly larger for human avatars than for robot avatars. Taking methodological concerns into account and suggesting future experimental variations, we argue that the submentalizing account, which assumes that visual perspective taking is based on domain-general processes, provides a good explanation for our results.
