Multiple mechanisms of visual prediction as revealed by the timecourse of scene-object facilitation

Abstract

Not only semantic associations, but also recently learned arbitrary associations, have the potential to facilitate visual processing in everyday life. For example, knowledge of a (moveable) object's location at a specific time may facilitate visual processing of that object. In our prior work, we showed that previewing a scene can facilitate processing of recently associated objects at the level of visual analysis (Smith and Federmeier in Journal of Cognitive Neuroscience, 32(5):783-803, 2020). In the current study, we assess how rapidly this facilitation unfolds by manipulating scene preview duration. We then compare our results to studies using well-learned object-scene associations in a first-pass assessment of whether systems consolidation might speed up high-level visual prediction. In two ERP experiments (N = 60), we had participants study categorically organized novel object-scene pairs in an explicit paired-associate learning task. At test, we varied contextual pre-exposure duration, both between subjects (200 vs. 2500 ms) and within subjects (0-2500 ms). We examined the N300, an event-related potential component linked to high-level visual processing of objects and scenes, and found that N300 effects of scene congruity increase with longer scene previews, up to approximately 1-2 s. Similar results were obtained for response times and in a separate component-neutral ERP analysis of visual template matching. Our findings contrast with prior evidence that scenes can rapidly facilitate visual processing of commonly associated objects. This raises the possibility that systems consolidation might mediate different kinds of predictive processing with different temporal profiles.
