Improving small object detection via cross-layer attention.


作者:Peng Ru, Tan Guoran, Chen Xingyu, Lan Xuguang
Small object detection is a fundamental and challenging topic in the computer vision community. To detect small objects in images, several methods rely on feature pyramid networks (FPN), which can alleviate the conflict between resolution and semantic information. However, FPN-based methods also have limitations. First, existing methods focus only on regions with close spatial distance, hindering the effectiveness of long-range interactions. Second, element-wise addition ignores the different receptive fields of the two feature layers, causing higher-level features to introduce noise into the lower-level features. To address these problems, we propose a cross-layer attention (CLA) block as a generic block for capturing long-range dependencies and reducing noise from high-level features. Specifically, the CLA block performs feature fusion by factoring in both the channel and spatial dimensions, which provides a reliable way of fusing features from different layers. Because CLA is a lightweight and general block, it can be plugged into most feature-fusion frameworks. On the COCO 2017 dataset, we validated the CLA block by plugging it into several state-of-the-art FPN-based detectors. Experiments show that our approach achieves consistent improvements in both object detection and instance segmentation, which demonstrates its effectiveness.
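The abstract does not spell out the CLA block's internals, but the idea of replacing plain element-wise addition with channel- and spatial-attention-weighted fusion can be illustrated. The sketch below is a hypothetical numpy toy (function name `cla_fuse`, nearest-neighbor upsampling, and squeeze-and-excite-style gating are all assumptions, not the paper's actual design):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cla_fuse(low, high):
    """Illustrative cross-layer attention fusion (hypothetical sketch).

    low:  lower-level feature map, shape (C, H, W)   -- high resolution
    high: higher-level feature map, shape (C, H/2, W/2) -- strong semantics
    Returns a fused map of shape (C, H, W).
    """
    # Upsample the high-level map to the low-level resolution
    # (nearest-neighbor, standing in for a learned upsampler).
    up = high.repeat(2, axis=1).repeat(2, axis=2)

    # Channel attention: gate each channel of the upsampled map by a
    # weight derived from its global average (squeeze-and-excite style).
    chan_gate = sigmoid(up.mean(axis=(1, 2)))        # shape (C,)
    up = up * chan_gate[:, None, None]

    # Spatial attention: weight each location by the agreement between
    # the two layers, so noisy high-level responses are down-weighted.
    spat_gate = sigmoid((low * up).mean(axis=0))     # shape (H, W)

    # Attention-weighted fusion instead of plain element-wise addition.
    return low + spat_gate[None, :, :] * up

rng = np.random.default_rng(0)
low = rng.standard_normal((8, 16, 16))
high = rng.standard_normal((8, 8, 8))
fused = cla_fuse(low, high)
print(fused.shape)  # (8, 16, 16)
```

In a real detector this gating would be learned, but even the toy version shows how attention can suppress high-level noise at locations where the two layers disagree, rather than adding it in wholesale.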
