Neural Tracking of the Maternal Voice in the Infant Brain


Abstract

Infants preferentially process familiar social signals, but the neural mechanisms underlying continuous processing of maternal speech remain unclear. Using EEG-based neural encoding models built on temporal response functions, we investigated how 7-month-old human infants track maternal versus unfamiliar speech and whether this affects simultaneous face processing. Infants (13 boys, 12 girls) showed stronger neural tracking of their mother's voice, independent of its acoustic properties, suggesting an early neural signature of voice familiarity. Furthermore, central encoding of unfamiliar faces was diminished when infants heard their mother's voice, and face-tracking accuracy at central electrodes increased with earlier occipital face tracking, suggesting heightened attentional engagement. However, we found no evidence for differential processing of happy versus fearful faces, in contrast to previous findings on early emotion discrimination. Our results reveal interactive effects of voice familiarity on multimodal processing in infancy: while maternal speech enhances neural tracking, it may also alter how other social cues, such as faces, are processed. These findings suggest that early auditory experience shapes how infants allocate cognitive resources to social stimuli, underscoring the need to consider cross-modal influences in early development.
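The temporal response function (TRF) approach named in the abstract models the continuous EEG as a lagged linear transformation of a stimulus feature such as the speech envelope; "neural tracking" is then the correlation between the EEG predicted by that model and the EEG actually measured. The study's own analysis code is not given here, so the following is only a minimal NumPy sketch of the standard ridge-regression TRF estimator on simulated data; the sampling rate, lag range, kernel, and all variable names are illustrative assumptions, not details from the paper.

```python
import numpy as np

def lag_matrix(stim, lags):
    """Design matrix of time-lagged copies of a 1-D stimulus feature.
    Row t holds stim[t - lag] for each lag (zero-padded at the edges)."""
    n = len(stim)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stim[:n - lag]
        else:
            X[:lag, j] = stim[-lag:]
    return X

def fit_trf(stim, eeg, lags, alpha=1.0):
    """Ridge-regression TRF weights: w = (X'X + alpha*I)^-1 X'y."""
    X = lag_matrix(stim, lags)
    XtX = X.T @ X
    return np.linalg.solve(XtX + alpha * np.eye(XtX.shape[0]), X.T @ eeg)

def tracking_accuracy(stim, eeg, lags, w):
    """Neural tracking score: Pearson r between predicted and measured EEG."""
    pred = lag_matrix(stim, lags) @ w
    return float(np.corrcoef(pred, eeg)[0, 1])

# Illustrative simulation (not the study's data): one EEG channel driven
# by a speech envelope through a known 0-234 ms kernel at 64 Hz, plus noise.
rng = np.random.default_rng(0)
fs, dur = 64, 60                              # assumed: 64 Hz, 60 s of signal
envelope = np.abs(rng.standard_normal(fs * dur))
lags = list(range(16))                        # assumed lag range in samples
true_kernel = np.hanning(16)
eeg = lag_matrix(envelope, lags) @ true_kernel + 0.5 * rng.standard_normal(fs * dur)

w = fit_trf(envelope, eeg, lags, alpha=1.0)
r = tracking_accuracy(envelope, eeg, lags, w)
```

In a design like the one the abstract describes, a higher `r` for the mother's speech than for an unfamiliar speaker's, evaluated on held-out data, is what "stronger neural tracking" means operationally; real analyses additionally cross-validate the ridge parameter and fit all electrodes jointly.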
