Abstract
This article concerns simple visual-search tasks that require people to respond "yes" or "no" about whether a specified target object is present in stimulus displays containing relatively small numbers of typically simple objects. The currently most popular cognitive theories regarding human performance in these tasks claim that a person's response time depends on the number of shifts of covert visual attention required to choose the response. Such theories provide no significant roles for cognitive task strategies, eye movements, or early-vision limitations (e.g., lower visual resolution and increased crowding effects for displayed objects at greater retinal eccentricities). In contrast, the present research used the EPIC computational cognitive architecture to construct precise simulation models that rely on these more basic mechanisms without assuming any role for covert attention. Results from the simulations show that models systematically incorporating early-vision limitations, eye movements, and parsimonious cognitive task strategies may suffice to account precisely for both the speed and the accuracy of human performance during simple visual search. These models succeed at fitting not only empirical data aggregated across participants but also data from different subsets of individual participants who had similar visual parameter values and task strategies. Thus, it appears that covert-attention shifting is not necessary to explain simple visual search. Future models of visual search can be made more veridical and complete by avoiding ill-defined concepts of attention and instead further developing theories of visual mechanisms, task strategies, and motor mechanisms to explain empirical phenomena.