Abstract
Past work has demonstrated that predictive information modulates how the brain responds to visual stimuli, but it is not yet clear how the brain integrates different types of predictive information to facilitate efficient perception. Here, we examined how expectations about upcoming stimulus identities ("what" information) and upcoming stimulus locations ("where" information) modulate the directionality and occurrence of prediction effects in brain activity. Participants (n = 40) viewed real-world object images in rapid serial visual presentation (RSVP) streams that were predictable in terms of both object identity and stimulus location. Multivariate pattern analyses of electroencephalography (EEG) data were used to quantify and compare the degree of information represented in neural activity when stimuli were random (unpredictable), expected, or unexpected in terms of identity and location. Decoding accuracy for expected locations was significantly reduced relative to random locations between 160 and 238 ms post-onset. However, this effect subsequently reversed, with decoding accuracy for expected locations becoming higher than accuracy for random locations between 273 and 430 ms. This temporally dynamic effect was not replicated in analyses decoding object identity. However, consistent evidence for reduced decoding of unexpected relative to random stimuli in later time windows (>250 ms post-onset) was identified across both stimulus types (i.e., object identities and locations). These results have important implications for predictive coding research, as they highlight complexities in how predictability modulates neural responses.