MEP-YOLOv5s: Small-Target Detection Model for Unmanned Aerial Vehicle-Captured Images


Abstract

Due to complex backgrounds, significant scale variations of targets, and dense distributions of small objects in Unmanned Aerial Vehicle (UAV) aerial images, traditional object detection algorithms struggle to adapt to such scenarios. This article introduces a drone detection model, MEP-YOLOv5s, which optimizes the Backbone, Neck layer, and C3 module of YOLOv5s and incorporates effective attention mechanisms. Training efficiency is further improved by replacing the traditional CIoU (Complete Intersection over Union) loss with the MPDIoU (Minimum Point Distance-based Intersection over Union) loss. The model demonstrates excellent performance in typical drone detection scenarios, especially for small and dense objects. To holistically balance detection accuracy and inference efficiency, we propose a Comprehensive Performance Indicator (CPI), which evaluates model performance by considering both accuracy and efficiency. Evaluations on the VisDrone2019 dataset demonstrate that MEP-YOLOv5s achieves a 3.3% improvement in precision (P), a 20.9% increase in mAP@0.5, and a 19.86% gain in CPI (α = 0.5) compared with the baseline model. Additional experiments on the NWPU VHR-10 dataset confirm that MEP-YOLOv5s outperforms existing state-of-the-art methods, offering a robust solution for UAV-based small object detection with enhanced feature extraction and attention-driven adaptability.
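The MPDIoU loss mentioned in the abstract penalizes the squared distances between the predicted and ground-truth top-left and bottom-right corners, normalized by the input image diagonal. A minimal sketch of that formulation is below; the box format (x1, y1, x2, y2), the helper names, and the exact way MEP-YOLOv5s integrates this loss are assumptions, not details taken from the paper.

```python
# Hedged sketch of an MPDIoU-style bounding-box regression loss.
# Assumed box format: (x1, y1, x2, y2) corner coordinates.
# img_w, img_h are the input image width and height used for normalization.

def iou(box_a, box_b):
    """Standard intersection-over-union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

def mpdiou_loss(pred, gt, img_w, img_h):
    """1 - MPDIoU, where MPDIoU subtracts normalized corner distances from IoU."""
    px1, py1, px2, py2 = pred
    gx1, gy1, gx2, gy2 = gt
    d1_sq = (px1 - gx1) ** 2 + (py1 - gy1) ** 2  # top-left corner distance
    d2_sq = (px2 - gx2) ** 2 + (py2 - gy2) ** 2  # bottom-right corner distance
    norm = img_w ** 2 + img_h ** 2               # squared image diagonal
    mpdiou = iou(pred, gt) - d1_sq / norm - d2_sq / norm
    return 1.0 - mpdiou
```

For perfectly overlapping boxes the loss is 0; as the corners drift apart, both the IoU term and the corner-distance penalties drive the loss up, which is what gives MPDIoU a useful gradient even for non-overlapping boxes.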
