Attention to detail: A conditional multi-head transformer for traffic sign recognition


Abstract

The challenge of traffic sign detection and recognition has become more critical with recent advances in autonomous and assisted driving technologies. Although object recognition problems, and traffic sign recognition in particular, have been studied extensively, most Vision Transformer (ViT) models still rely on static attention mechanisms with fixed projection matrices (Q, K, and V). This static mechanism limits the ability of ViTs to handle real-world detection and recognition tasks: partially or fully obscured signs, changes in illumination, and adverse weather all degrade feature extraction and compound misclassification. To overcome this challenge, this research proposes a Conditional Visual Transformer (CViT) that dynamically adapts feature aggregation, the Q, K, and V projections, and the attention mechanism itself to the type of input sign. Its main component is a controlled-failure deep learning model that targets specific types of traffic signs through varying feature extraction and attention adjustments, achieving high classification performance while minimizing misclassifications. In addition, an adaptive gating technique optimally adjusts the projection matrices across different traffic signs. The proposed CViT achieved an overall accuracy of 99.87%, a micro precision of 99.07%, a macro recall of 94.3%, and a macro F1 score of 99.07%. These results demonstrate the potential of CViT to improve both the efficiency and reliability of traffic sign recognition in autonomous driving applications.
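The abstract gives no implementation details, so the following is only a minimal NumPy sketch of one plausible reading of "adaptive gating over the Q, K, and V projections": a small gating network mixes several expert projection matrices per head, conditioned on a pooled summary of the input tokens (the class names, expert count, and the choice of mean-pooling as the conditioning signal are all assumptions, not the paper's method).

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class ConditionalAttention:
    """Hypothetical single attention head whose Q/K/V projections are a
    gated mixture of n_experts expert matrices, as one way to realize
    input-conditioned projections."""
    def __init__(self, d, n_experts, seed=0):
        rng = np.random.default_rng(seed)
        self.Wq = rng.standard_normal((n_experts, d, d)) / np.sqrt(d)
        self.Wk = rng.standard_normal((n_experts, d, d)) / np.sqrt(d)
        self.Wv = rng.standard_normal((n_experts, d, d)) / np.sqrt(d)
        self.Wg = rng.standard_normal((d, n_experts)) / np.sqrt(d)  # gating net
        self.d = d

    def __call__(self, x):
        # x: (tokens, d). Gate on the mean token, a stand-in for the
        # sign-type conditioning signal described in the abstract.
        g = softmax(x.mean(axis=0) @ self.Wg)        # (n_experts,) mixture weights
        Wq = np.einsum('e,eij->ij', g, self.Wq)      # condition-specific projections
        Wk = np.einsum('e,eij->ij', g, self.Wk)
        Wv = np.einsum('e,eij->ij', g, self.Wv)
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        attn = softmax(q @ k.T / np.sqrt(self.d))    # (tokens, tokens)
        return attn @ v

x = np.random.default_rng(1).standard_normal((5, 16))  # 5 patch tokens, d=16
out = ConditionalAttention(d=16, n_experts=4)(x)
print(out.shape)  # (5, 16)
```

Because the gate depends on the input, two different sign crops yield different effective projection matrices, which is the property the abstract contrasts with the fixed projections of a standard ViT.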
