Approaching human visual perception through AI-based representation of figure-ground segregation


Abstract

INTRODUCTION: Understanding how the visual system assigns borders to foreground objects is central to figure-ground perception, yet the computational principles underlying this process are still under investigation. METHODS: We trained multiple convolutional neural network (CNN) architectures on simple overlapping/occlusion stimuli and tested them on systematically degraded contours to probe how border-ownership (BOS) inference depends on available border context. RESULTS: Across networks, BOS could be inferred from feedforward computations even under degraded conditions, but performance showed a strong dependence on junction-like configurations, indicating that geometric context contributes more than isolated edges. Accuracy increased approximately linearly with the amount of contextual information provided by fragmented borders, and representation analyses revealed a hierarchical progression from local edge responses to more spatially coherent, BOS-specific features. DISCUSSION: Together, these results delineate which aspects of BOS can emerge from hierarchical feedforward processing and suggest that additional mechanisms such as horizontal and feedback interactions may reduce the visual information required for robust figure-ground segregation.
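To make the methodology concrete, the kind of overlapping/occlusion stimulus described above can be sketched in a few lines. Everything here is an illustrative assumption (stimulus geometry, function names, the choice of a single gradient filter), not the paper's actual dataset or model: two squares overlap, the occluder owns the shared border, and a purely local edge filter responds the same way on either side of that border, which is why junction-like context matters for border-ownership (BOS) inference.

```python
import numpy as np

def make_occlusion_stimulus(size=32):
    """Hypothetical stimulus generator (not the paper's dataset):
    render two overlapping squares on a blank canvas. The square
    drawn second occludes the first, so the shared border inside the
    overlap region is owned by the occluder."""
    img = np.zeros((size, size), dtype=np.float32)
    img[4:20, 4:20] = 0.5    # partially occluded square (figure 1)
    img[12:28, 12:28] = 1.0  # occluding square (figure 2)
    owner = 2                # BOS label: square 2 owns the shared border
    return img, owner

def local_edge_response(img):
    """Horizontal central-difference filter. It responds with equal
    magnitude whichever side of an edge is the figure, illustrating
    that an isolated local edge detector cannot assign ownership;
    geometric context (e.g. T-junctions) must disambiguate."""
    pad = np.pad(img, ((0, 0), (1, 1)))
    return pad[:, 2:] - pad[:, :-2]

img, owner = make_occlusion_stimulus()
edges = local_edge_response(img)
```

Degrading the contours, as in the experiments above, would then amount to masking out segments of `img` along the square borders and measuring how classification accuracy falls with the remaining contextual fragments.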
