Automatic road damage recognition based on improved YOLOv11 with multi-scale feature extraction and fusion attention mechanism


Abstract

Rapid urbanization and growing traffic volumes have increased the demand for efficient and accurate road damage detection to ensure traffic safety and optimize maintenance. Traditional manual and vehicle-mounted inspection methods are often inefficient, costly, and prone to error. Deep learning-based approaches have made progress but still face challenges in detecting small objects, handling complex backgrounds, and meeting real-time requirements due to high computational costs and limited generalization. This study proposes an improved road damage detection method based on YOLOv11, incorporating a Tiny Object Detection Layer for enhanced small object recognition through high-resolution and multi-scale feature fusion. A Global Attention Mechanism is integrated to emphasize critical regions and suppress background noise. Additionally, lightweight convolution modules (C3k2CrossConv and C3k2Ghost) optimize the network to reduce computational complexity and improve inference speed. Experimental results on the RDD2022 dataset show that the YOLOv11-ATL model achieves 3.2% and 3.1% gains in mAP@0.50 and mAP@0.50:0.95, respectively, demonstrating robust performance in complex environments while maintaining a favorable balance between accuracy and efficiency. Overall, the proposed approach offers a practical and effective solution for intelligent road damage detection, supporting urban infrastructure management and intelligent transportation systems.
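The attention mechanism described above reweights feature maps so that damage regions contribute more than background. As a rough illustration of the idea (not the paper's actual GAM implementation, which uses learned convolution and MLP layers inside the network), the sketch below applies a simplified channel gate followed by a spatial gate to a single feature map; all function and variable names here are illustrative, and the random weights stand in for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gam_like_attention(feat, reduction=4):
    """Simplified sketch of a GAM-style attention block on one (C, H, W)
    feature map: channel gating, then spatial gating. The real module
    learns its MLP/conv weights during training; random weights are used
    here purely to make the sketch runnable."""
    c, h, w = feat.shape
    # Channel attention: pool each channel to a scalar descriptor, pass
    # it through a small two-layer MLP, and gate channels with a sigmoid.
    desc = feat.mean(axis=(1, 2))                        # (C,)
    w1 = rng.standard_normal((c, c // reduction)) * 0.1  # stand-in weights
    w2 = rng.standard_normal((c // reduction, c)) * 0.1
    ch_gate = sigmoid(np.maximum(desc @ w1, 0.0) @ w2)   # (C,)
    feat = feat * ch_gate[:, None, None]
    # Spatial attention: build per-pixel descriptors from cross-channel
    # mean and max, then gate every spatial location (a stand-in for the
    # convolutional spatial branch in the real module).
    sp_desc = np.stack([feat.mean(axis=0), feat.max(axis=0)])  # (2, H, W)
    sp_gate = sigmoid(sp_desc.mean(axis=0))              # (H, W)
    return feat * sp_gate[None, :, :]

x = rng.standard_normal((8, 16, 16))   # toy feature map: 8 channels, 16x16
y = gam_like_attention(x)
print(y.shape)
```

The output keeps the input shape; only the magnitudes change, with positions the gates score highly retaining more of their activation, which is the sense in which attention "emphasizes critical regions and suppresses background noise."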
