Learning to combine top-down context and feed-forward representations under ambiguity with apical and basal dendrites


Abstract

One of the hallmark features of neocortical anatomy is the presence of extensive top-down projections into primary sensory areas. It is hypothesized that one of the roles of these top-down projections is to carry contextual information that helps animals to resolve ambiguities in sensory data. One proposed mechanism of contextual integration is a combination of input streams at distinct apical and basal dendrites of pyramidal neurons. Computationally, however, it is yet to be demonstrated how such an architecture could leverage distinct compartments for flexible contextual integration and sensory processing. Here, we implement a deep neural network with distinct apical and basal compartments that integrates (a) contextual information from top-down projections to apical compartments and (b) sensory representations driven by bottom-up projections to basal compartments. In addition, we develop a new contextual integration task using generative modeling. The performance of deep neural networks augmented with our "apical prior" exceeds that of single-compartment networks. We find that a sparse subset of neurons of the context-relevant categories receive the largest top-down signals. We further show that this sparse gain modulation is necessary. Altogether, this suggests that the "apical prior" could be key for handling the ambiguities that animals encounter in the real world.
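As a rough illustration only (not the authors' implementation, whose details are not given in the abstract), the described architecture can be sketched as a unit with two input compartments: a basal compartment driven by bottom-up sensory input, and an apical compartment whose top-down contextual input acts as a multiplicative gain on the basal drive. The class name, layer sizes, and the specific gain function below are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

class TwoCompartmentLayer:
    """Hypothetical sketch of a pyramidal-neuron-inspired layer with
    separate basal (feed-forward) and apical (top-down context)
    compartments. The apical input modulates the basal drive
    multiplicatively, one plausible reading of the "apical prior"."""

    def __init__(self, n_in, n_ctx, n_out):
        # Separate weight matrices for the two dendritic compartments.
        self.W_basal = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_out, n_in))
        self.W_apical = rng.normal(0.0, 1.0 / np.sqrt(n_ctx), (n_out, n_ctx))

    def forward(self, x, context):
        basal = self.W_basal @ x                    # bottom-up sensory drive
        # Rectified apical input yields a gain >= 1: context can only
        # amplify, never silence, feed-forward responses (an assumption).
        gain = 1.0 + np.maximum(self.W_apical @ context, 0.0)
        return np.maximum(basal * gain, 0.0)        # gated ReLU output

# Example usage with arbitrary sizes.
layer = TwoCompartmentLayer(n_in=8, n_ctx=4, n_out=16)
y = layer.forward(rng.normal(size=8), rng.normal(size=4))
```

Because the apical weights are learned separately, training could in principle drive the gain toward the sparse, category-selective modulation the abstract reports, with most units receiving near-unit gain and a context-relevant subset receiving the largest top-down signals.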
