Abstract
Human subjects were intermittently reinforced with money for responding correctly on a conditional matching-to-sample task. Matching performance was examined as a function of (a) the duration of the time-outs (TOs) that followed every incorrect response and (b) the frequency (the FR value) with which TOs followed incorrect responses. Matching accuracy increased with longer TOs and decreased when TOs were presented less frequently.