Mirror manifolds: partially overlapping neural subspaces for speaking and listening


Abstract

Participants in conversations need to associate words with their speakers while also retaining those words' general meanings. For example, someone talking about their hand is not referring to the other speaker's hand, but the word "hand" still carries speaker-general information (e.g., having five fingers). These two requirements impose a cross-speaker generalization/differentiation dilemma that is not well addressed by existing theories. We hypothesized that the brain resolves the dilemma using a vectorial semantic code that blends collinear and orthogonal coding subspaces. To test this hypothesis, we examined semantic encoding in populations of hippocampal single neurons recorded during conversations between epilepsy patients and healthy partners in the epilepsy monitoring unit (EMU). We found clear semantic encoding for both spoken and heard words, with the strongest encoding around the time of utterance for production and just after it for reception. Crucially, hippocampal neurons' codes for word meaning were poised between fully orthogonalized and fully collinearized. Moreover, different semantic categories were orthogonalized to different degrees: body parts and names were most differentiated between speakers; function words and verbs were least differentiated. Finally, the hippocampus used the same coding principle to separate different partners in three-person conversations, with greater orthogonalization between self and other than between two others. Together, these results suggest a new solution to the problem of binding word meanings with speaker identity.
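The degree to which two coding subspaces are collinear versus orthogonal can be quantified with principal angles: cosines near 1 indicate shared (speaker-general) directions, cosines near 0 indicate separated (speaker-specific) directions. The sketch below is purely illustrative and is not the authors' analysis pipeline; the function name, matrix shapes, and example data are assumptions.

```python
import numpy as np

def subspace_alignment(A, B):
    """Cosines of the principal angles between span(A) and span(B).

    A, B: (n_neurons, k) matrices whose columns span each subspace,
    e.g., semantic coding axes fit separately for produced vs. heard
    words (hypothetical setup). Returns k values in [0, 1]:
    1 = a fully shared (collinear) direction, 0 = a fully
    orthogonalized direction.
    """
    # Orthonormalize each basis, then take the SVD of the cross-product;
    # the singular values are the principal-angle cosines.
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

# Toy example in a 4-neuron space: the two subspaces share one axis
# and differ on the other, i.e., partial overlap.
A = np.array([[1., 0.], [0., 1.], [0., 0.], [0., 0.]])  # span{e1, e2}
B = np.array([[1., 0.], [0., 0.], [0., 1.], [0., 0.]])  # span{e1, e3}
print(subspace_alignment(A, B))  # → [1. 0.]
```

A code "poised between" the two regimes would yield intermediate cosines rather than the extreme values of this toy case.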
