Abstract
In current computational models of oculomotor learning, it is 'the' movement vector that is adapted in response to targeting errors. However, for saccadic eye movements, learning has a spatially distributed nature, i.e., it transfers to surrounding positions. This adaptation field resembles the topographic maps of visual and motor activity in the brain and suggests that learning does not act on the population vector but already at the level of the 2D population response. Here we present a population-based gain field model of saccade adaptation in which sensorimotor transformations are implemented as error-sensitive gain field maps that modulate the population responses of visual and motor signals and of the internal saccade estimate based on corollary discharge (CD). We fit the model to saccades and visual target localizations across adaptation, showing that adaptation and its spatial transfer can be explained by locally distributed learning that operates on visual, motor, and CD gain field maps. We show that 1) the scaled locality of the adaptation field is explained by population coding, 2) its radial shape is explained by error encoding in polar-angle coordinates, and 3) its asymmetry is explained by an asymmetric shape of learning rates along the amplitude dimension. Learning exhibits the highest peak rate, the widest spatial extent, and the most pronounced asymmetry in the motor domain, whereas in the visual and internal-saccade domains learning appears more localized. Moreover, our results suggest that the CD-based internal saccade representation has a response field that monitors only part of the ongoing saccade changes during learning. Our framework opens the door to studying the spatial generalization and interference of learning in multiple contexts.