AutoTriNet-YOLO: a triple-attention framework for robust traffic sign detection



Abstract

Traffic sign detection (TSD) remains a critical challenge in intelligent transportation systems due to factors such as small target sizes, environmental variability, and real-time constraints. While deep learning-based methods like YOLO have advanced TSD performance, existing approaches often struggle to balance accuracy, computational efficiency, and robustness across diverse scenarios. This paper proposes AutoTriNet-YOLO, a novel framework that integrates triple-attention enhancement (local, global, and sequential pathways) with dynamic feature fusion and adaptive computation to address these limitations. The core innovation lies in the TriplePathBlock module, which parallelizes the Convolutional Block Attention Module (CBAM), Non-local blocks, and a Lite Transformer to capture multi-scale contextual dependencies efficiently. A Dynamic Fusion Gate adaptively weights the attention paths, while a Selective Insert mechanism prunes redundant operations based on input complexity. Evaluated on a comprehensive traffic sign dataset, AutoTriNet-YOLO achieves state-of-the-art performance with 86.6% mAP@50 and 65.3% mAP@50-95, outperforming existing methods such as TSD-YOLO and EDN-YOLO. Ablation studies validate the contributions of each component, particularly the CBAM pathway for local feature refinement. The framework maintains real-time efficiency, making it suitable for edge deployment in autonomous driving systems. This work advances robust TSD by unifying diverse attention mechanisms into a scalable, computationally optimized architecture.
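The abstract describes a Dynamic Fusion Gate that adaptively weights the outputs of the three attention pathways (CBAM, Non-local, Lite Transformer). The paper's exact gating formulation is not given here, so the following is only a minimal sketch under an assumed design: the gate produces one logit per pathway, a softmax turns the logits into weights that sum to one, and the fused feature map is the weighted sum of the three pathway outputs. All function names (`softmax`, `dynamic_fusion_gate`) and the logit source are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_fusion_gate(local_feat, global_feat, seq_feat, gate_logits):
    """Fuse three attention-pathway outputs with softmax gate weights.

    local_feat, global_feat, seq_feat: arrays of shape (C, H, W),
    standing in for the CBAM, Non-local, and Lite Transformer outputs.
    gate_logits: shape (3,), one logit per pathway (in the paper these
    would presumably be predicted from the input; here they are given).
    """
    # Stack the three pathway outputs into shape (3, C, H, W).
    paths = np.stack([local_feat, global_feat, seq_feat])
    # Softmax over the three paths yields weights that sum to 1.
    w = softmax(gate_logits)
    # Contract the path axis: weighted sum -> fused map of shape (C, H, W).
    return np.tensordot(w, paths, axes=1)
```

For example, logits of `[2.0, 0.0, 0.0]` would bias the fused output toward the local (CBAM) pathway, which is consistent with the ablation finding that the CBAM path contributes most to local feature refinement.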
