Flexible computation of object motion and depth based on viewing geometry inferred from optic flow


Abstract

We move our eyes and head to sample the visual environment. While these movements are essential for survival, they greatly complicate the analysis of retinal image motion. Our brain must account for the visual consequences of self-motion to perceive the 3D layout and motion of objects in a scene. We show that traditional models of visual compensation for eye movements fail when the eye both translates and rotates, and we propose a theory that computes both motion and depth in more natural viewing geometries. Consistent with our theoretical predictions, humans exhibit distinct perceptual biases when different viewing geometries are simulated by optic flow, and these biases occur without training or feedback. A neural network model trained to perform the same tasks suggests that viewing geometry modulates the joint tuning of neurons for retinal and eye velocity to mediate these adaptive computations. Our findings unify previously separate bodies of work by demonstrating that the brain adaptively perceives the dynamic 3D environment according to viewing geometry inferred from optic flow.
