Performance of Deaf Participants in an Abstract Visual Grammar Learning Task at Multiple Formal Levels: Evaluating the Auditory Scaffolding Hypothesis


Abstract

Previous research has hypothesized that human sequential processing may depend on hearing experience (the "auditory scaffolding hypothesis"), predicting that congenital deafness should hinder sequential rule learning. To test this hypothesis, we compared deaf signers' and hearing individuals' ability to acquire rules of different computational complexity in a visual artificial grammar learning task using sequential stimuli. As a group, deaf participants succeeded at all levels of the task; Bayesian analysis indicates that they acquired each of several target grammars at ascending levels of the formal language hierarchy. Overall, these results do not support the auditory scaffolding hypothesis. However, age- and education-matched hearing participants did outperform deaf participants in two of the three tested grammars. We suggest that this difference may be related to verbal recoding strategies in the two groups: any verbal recoding used by the deaf signers would be less effective because it would rely on the same visual channel required for the experimental task.
