Abstract
Traffic sign detection and recognition has become increasingly critical with recent advances in autonomous and assisted driving technologies. Although object recognition, and traffic sign recognition in particular, has been studied extensively, most Vision Transformer (ViT) models still rely on static attention mechanisms with fixed query, key, and value (Q, K, and V) projection matrices. This static design limits ViTs in real-world tasks such as object detection and traffic sign recognition, where partially or fully occluded signs, changes in illumination, and adverse weather conditions degrade feature extraction and compound the misclassification problem. To overcome this challenge, a Conditional Visual Transformer (CViT) is proposed in this research that dynamically adapts feature aggregation, the Q, K, and V projections, and the attention mechanism itself based on the input sign type. Its main component is a controlled-failure deep learning model built on the CViT that targets specific types of traffic signs through varied feature extraction and attention adjustments, yielding high classification performance while minimizing misclassifications. Furthermore, an adaptive gating technique is employed that adjusts the projection matrices across different traffic signs. The proposed CViT achieved an overall accuracy of 99.87%, a Micro Precision of 99.07%, a Macro Recall of 94.3%, and a Macro F1 Score of 99.07%. These results demonstrate the potential of CViT to improve both the efficiency and reliability of traffic sign recognition in autonomous driving applications.
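The abstract does not specify the exact architecture, but the core idea of input-conditioned Q, K, and V projections with adaptive gating can be illustrated with a minimal single-head sketch. All names here (`ConditionalAttention`, `n_experts`, the mean-pooled gating signal) are illustrative assumptions, not the paper's implementation: a small gating network mixes several candidate projection matrices per input, so different sign types effectively see different Q/K/V projections.

```python
import numpy as np


def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


class ConditionalAttention:
    """Single-head attention whose Q/K/V projections are a gated mixture
    of `n_experts` candidate projection matrices, conditioned on the input.
    This is a hypothetical sketch of the gating idea, not the paper's model.
    """

    def __init__(self, dim, n_experts=3, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(dim)
        # One (dim, dim) projection per expert for each of Q, K, V.
        self.Wq = rng.standard_normal((n_experts, dim, dim)) * scale
        self.Wk = rng.standard_normal((n_experts, dim, dim)) * scale
        self.Wv = rng.standard_normal((n_experts, dim, dim)) * scale
        # Gating weights: map a pooled input descriptor to expert scores.
        self.Wg = rng.standard_normal((dim, n_experts)) * scale

    def __call__(self, x):
        # x: (tokens, dim). Gate on the mean-pooled token features, so the
        # mixture of projections depends on the input (e.g. the sign type).
        g = softmax(x.mean(axis=0) @ self.Wg)      # (n_experts,)
        Wq = np.tensordot(g, self.Wq, axes=1)      # input-conditioned (dim, dim)
        Wk = np.tensordot(g, self.Wk, axes=1)
        Wv = np.tensordot(g, self.Wv, axes=1)
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        attn = softmax(q @ k.T / np.sqrt(x.shape[1]))
        return attn @ v
```

In this sketch the gate is a convex combination over experts, so the effective projection matrices vary smoothly with the input while the parameter count stays fixed; a fixed-projection ViT corresponds to the degenerate case of a single expert.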