Abstract
The ability to ignore salient yet irrelevant stimuli is essential for accomplishing even simple tasks. Previous research has shown that observers become better able to suppress distracting stimuli through experience, yet the precise mechanisms of this learned suppression are a subject of debate. The current study (n = 230) employed a psychophysical approach combined with computational modeling to examine how learned spatial suppression affects perception and performance. The results show that items presented at suppressed locations are perceived as less bright than those at non-suppressed locations, suggesting that learned suppression directly reduces the perceived salience of items. To determine how this salience change affects visual search, a computational modeling approach was used to compare competing models of attentional selection. This analysis favored a model in which learned suppression reduces the salience of objects presented at suppressed locations within the initial salience computation. Because their salience is reduced, these items compete less effectively for attentional processing and capture attention less often.
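The favored mechanism described above can be illustrated with a minimal sketch. The salience values, suppression weight, and display layout below are hypothetical, chosen only to show how a multiplicative reduction applied within the initial salience computation changes which location wins the attentional competition; this is not the study's actual model implementation.

```python
import numpy as np

# Hypothetical bottom-up salience map for a four-location search display
# (arbitrary units). Location 1 holds a salient singleton distractor.
salience = np.array([0.4, 0.9, 0.5, 0.4])

# Learned spatial suppression, modeled as a multiplicative weight < 1
# applied to the suppressed location within the initial salience computation.
suppression_weight = 0.5
suppressed_location = 1

weighted = salience.copy()
weighted[suppressed_location] *= suppression_weight

# Attention is assumed to be drawn to the location with the highest
# post-suppression salience, so the suppressed distractor no longer wins.
attended = int(np.argmax(weighted))
```

Without the suppression weight, `np.argmax(salience)` returns location 1 (the distractor); with it, the distractor's salience drops below that of other items and a different location captures attention, mirroring the reduced capture rate reported in the abstract.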