Intact Contextual Cueing for Search in Realistic Scenes with Simulated Central or Peripheral Vision Loss


Abstract

PURPOSE: Search in repeatedly presented visual search displays can benefit from implicit learning of the display items' spatial configuration, an effect known as contextual cueing. Previous work found contextual cueing to be reduced in observers with foveal or peripheral vision loss. Whereas that work used symbolic (T-among-L) search displays with arbitrary configurations, here we investigated search in realistic scenes. Search in meaningful realistic scenes may benefit much more from explicit memory of the target location. We hypothesized that this explicit recall of the target location considerably reduces the visuospatial working memory demands of search, thereby enabling efficient search guidance by learnt contextual cues in observers with vision loss.

METHODS: Two experiments with gaze-contingent scotoma simulation (Experiment 1: central scotoma, n = 39; Experiment 2: peripheral scotoma, n = 40) were carried out with normally sighted observers. Observers had to find a cup in pseudorealistic indoor scenes and discriminate the direction of the cup's handle.

RESULTS: With both central and peripheral scotoma simulation, contextual cueing was observed in repeatedly presented configurations.

CONCLUSIONS: The data show that patients suffering from central or peripheral vision loss may benefit more from memory-guided visual search than would be expected from scotoma simulations and patient studies using abstract symbolic search displays.

TRANSLATIONAL RELEVANCE: In the assessment of visual search in patients with vision loss, semantically meaningless abstract search displays may yield insights into deficient search functions, but more realistic, meaningful search scenes are needed to assess whether search deficits can be compensated.
