Joint Population Coding and Temporal Coherence Link an Attended Talker's Voice and Location Features in Naturalistic Multi-talker Scenes


Abstract

Listeners effortlessly extract multi-dimensional auditory objects, such as a localized talker, from complex acoustic scenes. However, the neural mechanisms that enable simultaneous encoding and linking of distinct sound features, such as a talker's voice and location, are not fully understood. Using invasive intracranial recordings in seven neurosurgical patients (four male, three female), we investigated how the human auditory cortex processes and integrates these features during naturalistic multi-talker scenes and how attentional mechanisms modulate such feature integration. We found that cortical sites exhibit a continuum of feature sensitivity, ranging from single-feature-sensitive sites (responsive primarily to voice spectral features or to location features) to dual-feature-sensitive sites (responsive to both features). At the population level, neural response patterns from both single- and dual-feature-sensitive sites jointly encoded the attended talker's voice and location. Notably, single-feature-sensitive sites encoded their primary feature with greater precision but also represented coarse information about the secondary feature. Sites selectively tracking a single, attended speech stream concurrently encoded both voice and location features, demonstrating a link between selective attention and feature integration. Additionally, attention selectively enhanced temporal coherence between voice- and location-sensitive sites, suggesting that temporal synchronization serves as a mechanism for linking these features. Our findings highlight two complementary neural mechanisms, joint population coding and temporal coherence, that enable the integration of voice and location features in the auditory cortex. These results provide new insights into the distributed, multi-dimensional nature of auditory object formation during active listening in complex environments.
