Seeing speech: Neural mechanisms of cued speech perception in prelingually deaf and hearing users


Abstract

For many deaf people, lip-reading plays a major role in verbal communication. However, lip movements are inherently ambiguous, so lip-reading does not allow a full understanding of speech. The resulting difficulties in language access may have serious consequences for language, cognitive, and social development. Cued speech (CS) was developed to eliminate this ambiguity by complementing lip-reading with hand gestures, giving access to the entire phonological content of speech through the visual modality alone. Despite its proven effectiveness in improving linguistic and communicative abilities, the mechanisms of CS perception remain largely unknown. The goal of the present study is to delineate the brain regions involved in CS perception and identify their roles in visual and language-related processes. Three matched groups of participants were scanned during the presentation of videos of silent CS sentences, isolated lip movements, isolated gestures, CS sentences with speech sounds, and meaningless CS sentences: prelingually deaf users of CS, hearing users of CS, and naïve hearing controls. We delineated a number of mostly left-hemisphere brain regions involved in CS perception. We first found that language areas were activated in all groups by both silent CS sentences and isolated lip movements, and by gestures in deaf participants only. Despite overlapping activations when perceiving CS, several findings differentiated experts from novices. The Visual Word Form Area, which supports the interface between vision and language during reading, was activated by isolated gestures in deaf CS users; in contrast, Bayes factors indicated either weak evidence of no activation or negligible evidence of activation in the hearing and control groups.
Moreover, the integration of lip movements and gestures took place in a temporal language-related region in deaf users, and in movement-related regions in hearing users, reflecting their different profile of expertise in CS comprehension and production. Finally, we observed a strong involvement of the Dorsal Attentional Network in hearing users of CS, and identified the neural correlates of the variability in individual proficiency. Cued speech constitutes a novel pathway for accessing core language processes, halfway between speech perception and reading. The current study provides a delineation of the common and specific brain structures supporting those different modalities of language input, paving the way for further research.
