Analyzing the enhancement of CNN-YOLO and transformer-based architectures for real-time animal detection in complex ecological environments
Abstract

Automatic animal detection has become a critical capability in ecology, conservation, agriculture, and public safety, driven by the rapid growth of visual data collected through camera traps, UAVs, and remote sensors. The necessity of this study arises from the increasing demand to understand and apply these underlying detection techniques in practical domains such as animal husbandry, farming, and livestock management, where timely and accurate animal identification directly impacts productivity, welfare, and safety. Traditional convolutional neural networks (CNNs) have demonstrated strong accuracy in static or controlled environments but often face limitations in computational cost and inference speed. In contrast, the You Only Look Once (YOLO) family of one-stage detectors has revolutionized animal detection by achieving real-time performance while maintaining competitive accuracy across challenging geospatial environments. This review provides a chronological synthesis of detection approaches, tracing the evolution from handcrafted features and two-stage CNN-based models to modern YOLO architectures and transformer-enhanced frameworks. A detailed comparative analysis is presented, highlighting trade-offs in accuracy, speed, robustness, and deployment feasibility across diverse datasets, including camera trap imagery, UAV-based surveys, and satellite observations. Persistent challenges such as small-object detection, class imbalance, and limited cross-geographical generalization are discussed alongside enhancement strategies, including attention mechanisms, few-shot learning, and domain adaptation. Furthermore, practical deployment considerations are explored, with emphasis on edge computing platforms such as Jetson Nano, Coral TPU, and UAV-embedded systems. This review adopts a systematic methodology following PRISMA guidelines, covering studies published between 2015 and 2025, from which 142 were included after screening. 
Comparative findings show that on camera-trap datasets, transformer-augmented YOLO variants achieve up to 94% mAP under controlled illumination, while lightweight YOLOv7-SE and YOLOv8 architectures offer superior real-time performance (≥ 60 FPS) on UAV-based imagery. However, large-scale deployment remains constrained by edge-device memory limits and cross-domain generalization challenges.
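The channel-attention idea behind variants such as YOLOv7-SE (the squeeze-and-excitation mechanism mentioned among the enhancement strategies) can be illustrated with a minimal NumPy sketch. The weight matrices `w1`, `w2` and the reduction ratio `r` below are illustrative toy values, not taken from any model in this review:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(feature_map, w1, w2):
    """Squeeze-and-Excitation channel attention (inference-only sketch).

    feature_map: (C, H, W) array; w1: (C//r, C); w2: (C, C//r).
    """
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck MLP (ReLU, then sigmoid gate) -> per-channel weights in (0, 1)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))
    # Scale: reweight each channel of the feature map by its attention score
    return feature_map * s[:, None, None]

# Toy usage with random weights (hypothetical shapes, reduction ratio r = 2)
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
fm = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = squeeze_excite(fm, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Because the gate is a single scalar per channel, the block adds negligible compute relative to a convolutional backbone, which is why such attention modules are attractive for the edge platforms (Jetson Nano, Coral TPU) discussed above.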