Abstract
PURPOSE: In everyday life, we learn to find relevant objects in complex three-dimensional environments. This spatial learning is even more important under conditions of visual impairment. Yet, little is known about how spatial learning and impaired vision interact to shape search behavior in complex naturalistic settings. Here we assessed how spatial learning is used to mitigate the consequences of acute (simulated) central and peripheral vision loss when navigating an everyday three-dimensional environment.

METHODS: Seventy-five participants were assigned to one of three simulated vision conditions (full vision, central mask, or peripheral mask) and performed multiple searches for products in a virtual reality supermarket.

RESULTS: Task completion times and navigational efficiency were impaired by reduced vision but improved substantially over repeated product searches. Improvements were more pronounced with simulated vision loss, especially peripheral loss. Gaze and other orienting behavior also changed with learning, leading to more scanning at the initiation of the search under peripheral loss and less scanning during the actual search, particularly with central loss.

CONCLUSIONS: Spatial learning shapes visual orienting behavior and helps compensate for the detrimental consequences of vision loss during everyday search.

TRANSLATIONAL RELEVANCE: These results emphasize the importance of spatial consistencies for coping with visual impairments in everyday environments.