Abstract
Rapid learning is essential for flexible behavior, but its basis in the brain remains unknown. Here we introduce the PRISM plasticity rule, a unifying mechanistic model of three well-established, fast-acting synaptic plasticity rules (in the hippocampus, cerebellum, and mushroom body) that relies exclusively on presynaptic activity and an "instructive signal" from another brain area. Using a multi-region network model, we show that guiding PRISM plasticity with instructive signals enables the network to quickly learn flexible nonlinear dynamics underlying behaviorally relevant computations, and to emulate unknown external system dynamics from real-time error signals; we demonstrate both with comprehensive simulations supported by exact mathematical theory. PRISM plasticity guided by instructive signals is thus well suited to rapidly learning general-purpose neural computations, in contrast to canonical Hebbian rules. Finally, we show how incorporating this plasticity rule into artificial learning algorithms can solve long-range temporal credit assignment, a long-standing challenge in machine learning.