Abstract
In a visual environment, objects are encoded within a spatial and temporal context. The present study investigated whether incidental learning of spatial and temporal associations in hybrid visual and memory search enables observers to predict targets in space and time. In three experiments, observers searched for four previously memorized target items across many trials. We examined effects of learning target item sequences (e.g., the butterfly always follows the paint box), target item-location associations (the butterfly is always in the right corner), and target item-location sequences (the butterfly in the right corner always follows the paint box in the lower middle-left). We found only weak incidental learning of target item sequences alone. By contrast, we found robust learning of target item-location associations. Moreover, we found a reliable effect of sequence learning for target item-location associations. These findings suggest that spatiotemporal learning in hybrid search is hierarchical: Only when spatial and non-spatial target features are bound can temporal associations dynamically bias attention toward the task-relevant features expected to occur next.