Attention-enhanced StrongSORT for robust vehicle tracking in complex environments


Abstract

While multi-object tracking is critical for autonomous driving systems, traditional algorithms exhibit three fundamental limitations in complex scenarios: (1) blurred feature representations under occlusion and re-identification, causing identity switches; (2) insufficient sensitivity to scale-variant targets due to the fixed geometric constraints of conventional IoU-based loss functions; and (3) gradient degradation in deep convolutional layers, hindering discriminative feature learning. To address these challenges, we propose AE-StrongSORT (Attention-Enhanced StrongSORT), an attention-enhanced tracking framework featuring three systematic innovations. First, the GAM-YOLO (global attention mechanism YOLO) hybrid architecture integrates multi-scale feature fusion with a global attention mechanism (the GC2f structure). This design enhances cross-dimensional feature interaction through localized channel-spatial attention gates, significantly improving occlusion-resistant feature representation (IDF1 ↑ 9.99%, IDsw ↓ 9.85%). Second, the F-EIoU loss function introduces dynamic size-dependent penalty terms and difficulty-adaptive weighting factors, effectively balancing learning priorities between small targets and normal instances. Third, the optimized CBH-Conv module employs Hardswish activation and depthwise separable convolution to mitigate gradient vanishing while maintaining real-time efficiency (a 17% MOTA improvement at 213 FPS). Evaluated on the MOT-16 dataset, AE-StrongSORT demonstrates substantial improvements over the baseline StrongSORT, with gains of 17%, 2.78%, and 9.99% in MOTA, HOTA, and IDF1 respectively, alongside significant reductions in false and missed detections. These advances establish a novel technical pathway for robust vehicle tracking in real-world traffic scenarios characterized by coexisting scale variation, motion blur, and dense occlusion.
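The abstract does not spell out the exact F-EIoU formulation or the CBH-Conv activation in equations. Assuming the loss follows the published Focal-EIoU pattern (an EIoU term penalizing center distance and width/height differences, reweighted by IoUᵞ so harder, lower-IoU boxes are down-weighted less aggressively) and that Hardswish has its standard definition, a minimal pure-Python sketch looks like this; the function names and the γ = 0.5 default are illustrative, not taken from the paper:

```python
def eiou_loss(box_p, box_g):
    """EIoU between two axis-aligned boxes (x1, y1, x2, y2).

    Returns (iou, loss). Boxes are assumed non-degenerate (positive
    width and height), so the denominators below are never zero.
    """
    # intersection and plain IoU
    ix1, iy1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    ix2, iy2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_g = (box_g[2] - box_g[0]) * (box_g[3] - box_g[1])
    iou = inter / (area_p + area_g - inter)

    # smallest enclosing box (normalizes the penalty terms)
    cw = max(box_p[2], box_g[2]) - min(box_p[0], box_g[0])
    ch = max(box_p[3], box_g[3]) - min(box_p[1], box_g[1])

    # squared distance between box centers
    pcx, pcy = (box_p[0] + box_p[2]) / 2, (box_p[1] + box_p[3]) / 2
    gcx, gcy = (box_g[0] + box_g[2]) / 2, (box_g[1] + box_g[3]) / 2
    rho2 = (pcx - gcx) ** 2 + (pcy - gcy) ** 2

    # direct width / height penalties (this is what makes EIoU
    # size-sensitive, unlike the aspect-ratio term in CIoU)
    wp, hp = box_p[2] - box_p[0], box_p[3] - box_p[1]
    wg, hg = box_g[2] - box_g[0], box_g[3] - box_g[1]
    loss = (1 - iou
            + rho2 / (cw ** 2 + ch ** 2)
            + (wp - wg) ** 2 / cw ** 2
            + (hp - hg) ** 2 / ch ** 2)
    return iou, loss


def focal_eiou_loss(box_p, box_g, gamma=0.5):
    """Focal reweighting: scale the EIoU loss by IoU**gamma."""
    iou, loss = eiou_loss(box_p, box_g)
    return (iou ** gamma) * loss


def hardswish(x):
    """Hardswish activation as used in CBH-Conv: x * relu6(x + 3) / 6."""
    return x * min(max(x + 3.0, 0.0), 6.0) / 6.0
```

For identical boxes the loss is exactly zero, and it grows as the predicted box drifts in position or size, which is the behavior the difficulty-adaptive weighting exploits during training.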
