Keypoint-based modeling reveals fine-grained body pose tuning in superior temporal sulcus neurons


Abstract

Body pose and orientation serve as vital visual signals in primate non-verbal social communication. Leveraging deep learning algorithms that extract body poses from videos of behaving monkeys, applied to a monkey avatar, we investigated neural tuning for pose and viewpoint, targeting fMRI-defined mid and anterior Superior Temporal Sulcus (STS) body patches. We modeled the pose and viewpoint selectivity of the units with keypoint-based principal component regression with cross-validation, and applied model inversion as a key approach to identify effective body parts and views. Mid STS units were effectively modeled using view-dependent 2D keypoint representations, revealing that their responses were driven by specific body parts that differed among neurons. Some anterior STS units exhibited better predictive performance with a view-dependent 3D model. On average, anterior STS units were better fitted by a keypoint-based model incorporating mirror-symmetric viewpoint tuning than by view-dependent 2D and 3D keypoint models. In both regions, however, a view-independent keypoint model yielded worse predictive performance. This keypoint-based approach provides insights into how the primate visual system encodes socially relevant body cues, deepening our understanding of body pose representation in the STS.
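The core modeling step described above, keypoint-based principal component regression with cross-validation, can be sketched as follows. This is an illustrative sketch, not the authors' code: the keypoint counts, component number, and synthetic data are all assumptions standing in for the paper's pose features and neural responses.

```python
# Illustrative sketch (NOT the authors' implementation): principal component
# regression on flattened pose keypoints, with cross-validated prediction
# of a unit's response. All data here are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical sizes: 200 stimuli, 17 keypoints, each with (x, y) coordinates.
n_stimuli, n_keypoints = 200, 17
X = rng.normal(size=(n_stimuli, n_keypoints * 2))  # flattened 2D keypoints

# Simulate one unit whose response is a linear function of pose plus noise.
w = rng.normal(size=X.shape[1])
y = X @ w + rng.normal(scale=0.5, size=n_stimuli)

# Principal component regression: reduce keypoint dimensionality with PCA,
# then fit a linear readout on the retained components.
pcr = make_pipeline(PCA(n_components=10), LinearRegression())

# 5-fold cross-validated predictive performance (R^2), analogous to the
# cross-validation used to compare the 2D, 3D, and view-tuned models.
scores = cross_val_score(pcr, X, y, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.2f}")
```

In this framing, comparing view-dependent 2D, view-dependent 3D, mirror-symmetric, and view-independent models amounts to swapping in different keypoint feature matrices `X` and comparing their cross-validated scores for each unit.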
