Trial-to-trial dynamics and learning in a generalized, redundant reaching task


Abstract

If humans exploit task redundancies as a general strategy, they should do so even if the redundancy is decoupled from the physical implementation of the task itself. Here, we derived a family of goal functions that explicitly defined infinite possible redundancies between distance (D) and time (T) for unidirectional reaching. All [T, D] combinations satisfying any specific goal function defined a goal-equivalent manifold (GEM). We tested how humans learned two such functions, D/T = c (constant speed) and D·T = c, that were very different but could both be achieved by neurophysiologically and biomechanically similar reaching movements. Subjects were never explicitly shown either relationship, but were only instructed to minimize their errors. Subjects exhibited significant learning, and consolidation of learning, for both tasks. Initial error magnitudes were higher, but learning rates were faster, for the D·T task than for the D/T task. Learning the D/T task first facilitated subsequent learning of the D·T task. Conversely, learning the D·T task first interfered with subsequent learning of the D/T task. Analyses of trial-to-trial dynamics demonstrated that subjects corrected deviations perpendicular to each GEM faster than deviations along it, and did so to the same degree for both tasks, despite exhibiting significantly greater variance ratios for the D/T task. Variance measures alone failed to capture critical features of trial-to-trial control. Humans actively exploited these abstract task redundancies, even though they did not have to: they did not use readily available alternative strategies that could have achieved the same performance.
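The GEM analysis described above can be sketched numerically. The following is a minimal illustrative example, not the authors' actual analysis code: it assumes a linearized decomposition in which each trial's [T, D] deviation from a reference point on the GEM is split into a component tangent to the manifold (goal-equivalent, leaving the goal function unchanged) and a component along the gradient of the goal function (goal-relevant). The function name and the example values are hypothetical.

```python
import numpy as np

def gem_decompose(T, D, grad_f, ref):
    """Split the deviation of a trial (T, D) from a reference point `ref`
    on the GEM into components tangent and normal to the manifold,
    using a local linearization of the goal function f(T, D) = 0.

    grad_f : gradient of f evaluated at `ref`; normal to the GEM there.
    Returns (tangential, normal) deviation components.
    """
    n = np.asarray(grad_f, dtype=float)
    n = n / np.linalg.norm(n)            # unit normal: direction of grad f
    t = np.array([-n[1], n[0]])          # unit tangent: perpendicular to n
    dev = np.array([T, D], dtype=float) - np.asarray(ref, dtype=float)
    return float(dev @ t), float(dev @ n)

# Hypothetical example for the D*T = c task with c = 1:
# f(T, D) = D*T - 1, so grad f = (D, T) = (1, 1) at the
# reference point (T, D) = (1, 1) on the GEM.
tang, norm = gem_decompose(1.1, 0.95, grad_f=(1.0, 1.0), ref=(1.0, 1.0))
```

Under this decomposition, the paper's central comparison is between how quickly the `norm` (goal-relevant) and `tang` (goal-equivalent) components are corrected from one trial to the next.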
